Channel: Intel Communities : All Content - Wired Ethernet
Viewing all 4566 articles
Browse latest View live

Intel NIC drivers 19.3: huge 6000+ DPC latency spikes every few seconds


Hi, I would like to report that the newly released Intel NIC driver version 19.3 causes huge 6000+ DPC latency spikes every few seconds.

 

My specs:

 

Intel(R) 82579LM Gigabit Network Connection

 

Windows 7 SP1 32-bit + latest Windows updates

 

I downgraded to the previous driver version, 19.1, and the problem is gone.


I can't install Intel WiDi latest build 4.2.29.0 on Windows 8.1


Hi, when I try to install the latest build of the WiDi program (4.2.29.0) on Windows 8.1, on my NUC D34010WYKH2 with an Intel Dual Band Wireless-AC 7260, I get a message that the OS isn't compatible! How is this possible? Windows 8.1 is certified to work with WiDi. Could someone help me, please?

How can I get Intel HyperScan?


Hello everyone,

 

I have read Intel's papers on Hyperscan and was quite impressed by its performance. The only problem is that I could not find the library anywhere!

 

Could anybody show me how to get that library?

 

Thanks

Dual-port 10GbE performing at half speed


I have two servers running openSUSE, both with Intel dual-port 82599EB 10-Gigabit SFI/SFP+ adapters using the ixgbe driver, and I'm not getting the bandwidth I expect with iperf. Both ports are connected directly to the two ports of the second server.

 

On one machine I run

 

iperf -s

 

On the other machine I run these two commands in separate terminals.

 

iperf -c 192.168.1.10 -t 20 -B 192.168.1.20

iperf -c 192.168.1.11 -t 20 -B 192.168.1.21

 

And I get

 

[  4]  0.0-20.0 sec  7.63 GBytes  3.28 Gbits/sec

[  5]  0.0-20.0 sec  14.7 GBytes  6.30 Gbits/sec

 

If I run only one port, I get

 

[  6]  0.0-20.0 sec  22.8 GBytes  9.80 Gbits/sec

 

Shouldn't I expect roughly 10 Gbit/s on each port simultaneously? Do I have the wrong hardware for that requirement, or is my test invalid?
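For what it's worth, a single-stream iperf test is often limited by one CPU core rather than by the link. A sketch of the same dual-port test with each client pinned to its own core (the addresses are from the commands above; the core numbers are assumptions):

```shell
# Server side: one listener handles both incoming connections
iperf -s &

# Client side: pin each iperf client to its own CPU core so the two
# streams do not contend for the same core (core numbers are assumptions)
taskset -c 0 iperf -c 192.168.1.10 -t 20 -B 192.168.1.20 &
taskset -c 1 iperf -c 192.168.1.11 -t 20 -B 192.168.1.21 &
wait
```

Watching per-core CPU usage (e.g. with top) during the run should show whether the slower port is CPU-bound rather than link-bound.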

OpenStack VM: SR-IOV vs. non-SR-IOV performance


Hello

 

I am doing performance testing of VMs using OpenStack and SR-IOV.

 

Test scenario info:

  • Host to VM
  • 4 packet sizes (64, 100, 220, and 1500 bytes)
  • Each run lasts 10 s
  • TCP traffic
  • The VM runs Ubuntu with 768 MB RAM
  • iperf is used to measure bandwidth

 

Non-SR-IOV test: performed with 6 VMs running (using neutron + the Open vSwitch plugin)

SR-IOV test: performed with 3 VMs running (using neutron + SR-IOV ports; NIC: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2)

 

performance.PNG

 

The measured bandwidth for the 'without SR-IOV' and 'SR-IOV' cases is almost the same.

So I would like to know in which scenarios SR-IOV gives better performance.

If you think we are missing something in the test results or test scenario, please let me know.

 

Thanks

Ranjit Ranjan

Intel® 82579LM UDP multicast packetloss


I currently run a high-speed trading network, and the machines with the integrated Intel® 82579LM are experiencing severe packet loss. It started at about 2,000 packets lost per 10 minutes. I have tried all the solutions suggested in the previous threads about this issue: I updated the BIOS (the machines we use are HP Z220/Z230), installed the latest drivers, and turned the power-saving options on the NICs down to nothing. I also increased the input and output buffer sizes to the maximum setting, which does give a smaller loss rate (about 70 packets per 10 minutes), but that is still not good enough; this is a trading network, and we cannot afford packet loss. Machines that use a different model of Intel NIC don't experience this problem. I have done extensive troubleshooting on our network and on the Cisco switches we use, and they are all in top shape. The issue is within the Intel® 82579LM NIC itself and how it reacts to UDP multicast traffic.

 

Any suggestions I can implement to lessen the packet loss on the network? With so many of these NICs on the floor, I'm dropping roughly 900,000 packets a day, which is far too much.

 

P.S. As an example, over the last 24 hours an Intel I217-LM dropped 15 packets, compared to the Intel® 82579LM dropping roughly 9-10k packets per 24 hours.

PCIe pass-through with VFIO: iommu_group limitation


Dear SR-IOV and 82599 experts,

 

I'm testing KVM PCI device pass-through with a new 3.x kernel (RHEL 7.0).

The new device pass-through framework, VFIO, has been merged since kernel 3.6.

Before VFIO (with the older pci-stub) we could freely assign each PF or VF to a different guest VM, but with VFIO the kernel strictly prohibits assigning devices to different guests (or the host) when they are in the same iommu_group. I understand this is required for security, and that PCIe ACS (Access Control Services) can avoid this limitation.

 

The attachment describes the test environment; in it you can see that the test machine definitely supports PCIe ACS. So I expected both the VFs and the PFs to be split into different iommu_groups, so that each could be assigned to a different guest VM.

Against expectation, the two PFs on a single X520-SR2 are grouped in the same iommu_group, while the VFs are split as I expected. This means I cannot assign the two ports of a dual-port NIC to two different guest VMs. Is this correct?

 

It is not easy to understand why the PFs are tied into the same iommu_group while the VFs are not.

Could someone help me understand why this limitation still exists, and let me know if there is any way around it?

For performance reasons I need PF pass-through; I am not considering VFs.

 

Thank you.

Minho Ban

i350 "V2"


I210-T1 / I350-T2 power consumption?


Hi,

 

I want to know the power consumption of the I210-T1 and I350-T2 adapters, because I'm not sure the datasheet figures are right.

 

Intel lists 0.81 W for the "Intel Ethernet Server Adapter I210-T1", yet the "Intel Ethernet Server Adapter I350-T2" needs 4.40 W?

And then HP lists 3.00 W for its "E0X95AA Intel Ethernet I210-T1 GbE". I believe the Intel and HP cards are nearly the same, so why the big difference?

Also, could I run five "Intel Ethernet Server Adapter I210-T1" cards and still draw less power than one I350-T2?
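A quick check of the arithmetic behind that last question, using the wattages quoted above:

```shell
# Five I210-T1 adapters at the quoted 0.81 W each, versus one I350-T2
# at the quoted 4.40 W (figures taken from the post above)
awk 'BEGIN {
    five = 5 * 0.81
    print five " W"
    if (five < 4.40) print "five I210-T1 < one I350-T2"
}'
```

So, taking the datasheet numbers at face value, five I210-T1 cards (4.05 W) would indeed draw less than a single I350-T2.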

Does Intel support SFP+ optics with dual RX?

$
0
0

We would like to monitor two 10 Gbit/s signals with a single SFP+ optic; is that possible?

We are using R720 Dell servers with Intel NICs.

 

Thanks in advance,

                     Igal.

Rate limiting on Virtual Function


Hi,

I am using an Intel 10 Gigabit dual-port Ethernet PCI card (BN8110470) with the ixgbe driver on an Ubuntu 14.04 host.

I was able to create VFs, which are listed below in the lspci output:

 

02:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)

02:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)

03:10.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:10.1 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:10.2 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:10.3 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:10.4 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:10.5 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:10.6 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:10.7 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:11.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:11.1 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:11.2 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:11.3 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:11.4 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

03:11.5 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

 

The VFs are also listed in the ip link output:

 

20: rename20: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000

    link/ether a0:36:9f:11:b8:10 brd ff:ff:ff:ff:ff:ff

    vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 5 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 6 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

 

28: rename28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000

    link/ether a0:36:9f:11:b8:12 brd ff:ff:ff:ff:ff:ff

    vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 5 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 6 MAC 00:00:00:00:00:00, vlan 10, spoof checking on, link-state auto

 

As seen on device rename28, vf 6, assigning a VLAN works as expected:

ip link set rename28 vf 6 vlan 10

But setting a rate limit on the same VF is not working:

sudo ip link set rename28 vf 6 rate 250

RTNETLINK answers: Invalid argument

 

The guide I used for this configuration is "Configure QoS with Intel® Flexible Port Partitioning".

I know that I should be using the latest iproute2; I am on iproute2-ss131122, which I believed was a newer version.
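For reference, the full set of per-VF settings this path is meant to support looks like the sketch below; it assumes an iproute2 build new enough for VF rate support (the device name and values are from the post above; the MAC is an assumed example):

```shell
# Per-VF settings applied through the PF (rename28 / vf 6 as above)
ip link set rename28 vf 6 mac 02:00:00:00:00:06   # MAC is an assumed example
ip link set rename28 vf 6 vlan 10                 # tag the VF's traffic
ip link set rename28 vf 6 rate 250                # Tx rate cap in Mbit/s
```

An "Invalid argument" from RTNETLINK here typically means the installed ip binary (or the running driver) does not understand the rate attribute, so comparing `ip -V` against the iproute2 release you think is installed is a reasonable first check.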

Windows 10 Enterprise Preview

$
0
0

Has anyone been able to get multiple VLANs to work in the W10 previews?

I have tried multiple builds with the last 3 versions of PROSet and cannot enable the VLANs after creating them.

I can create them as I do in 8.1 Pro, but I cannot enable them.

I am using an 82579LM NIC in a Lenovo ThinkCentre.

Cable unplugged problem with I218-V


Hi, I have been having a network problem with my new motherboard, an MSI H81M ECO with an Intel I218-V. I use this new computer as a home server. Everything works, but after a while the NIC stops working, and the Network Connections panel in Windows 8.1 says the cable is unplugged. I have to disable and re-enable the device to make it work again, which is not a good solution for a home server.

 

I tried the latest drivers from the MSI website and from Intel, and tried three CAT 5e cables, but nothing works. Here is the XML from the event logged when the NIC disconnects:

 

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="e1dexpress" />
    <EventID Qualifiers="40964">27</EventID>
    <Level>3</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2015-01-28T13:06:10.229487700Z" />
    <EventRecordID>1864</EventRecordID>
    <Channel>System</Channel>
    <Computer>Aristote</Computer>
    <Security />
  </System>
  <EventData>
    <Data />
    <Data>Intel(R) Ethernet Connection (2) I218-V</Data>
    <Binary>0000040002003000000000001B0004A00000000000000000000000000000000000000000000000001B0004A0</Binary>
  </EventData>
</Event>

 

Thanks

Intel(R) 82579V Gigabit Network device issues


Dear all,

 

I have recently bought a new Sandy Bridge Core i5 machine and have been trying to install Windows SBS 2008, but during the process it asked for a driver for the Ethernet adapter, and I cannot find one anywhere online or on the driver CD. Can anyone help me locate an Intel(R) 82579V Gigabit Network driver for Windows SBS 2008, please?

 

Thanks a lot

Larry

Not getting traffic in promiscuous mode with SR-IOV


Hi

 

1. Is what I am trying to do possible?

     a. If yes, what is my problem?

     b. Is SR-IOV only usable with VLANs?

 

I am using Intel Corporation 82599ES 10-Gigabit SFI/SFP+

ethtool -i eth6

driver: ixgbe

version: 3.19.1-k

firmware-version: 0x80000597

bus-info: 0000:15:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no

 

 

 

 

I am trying to pass a device through to a guest using SR-IOV and libvirt.

I assigned VFs from a pool of SR-IOV VFs in a libvirt <network> definition as follows:

 

<network>

   <name>passthrough-eth6</name>

   <forward mode='hostdev' managed='yes'>

     <pf dev='eth6'/>

   </forward>

</network>

 

I launch the VM with virt-install …. --network network=passthrough-eth6

I inject traffic with a generator, so I set the interface to promiscuous mode without an IP address (ifconfig eth6 promisc up).

On the physical device eth6 I can see the RX counters increasing, but on the guest side nothing arrives on RX. Why?

I don't know which log to look at to understand the problem.

The network-scripts/eth1 file in the guest is defined as static (not DHCP), since the host interface it is connected to has no IP.

I have another interface on the host connected via OVS and libvirt, and it works fine; you can see it in the log as vnet0 (thanks to this mailing list's help).
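One detail that may matter here: an 82599 VF only receives frames that match its MAC/VLAN filters (plus broadcast and subscribed multicast), so promiscuous mode inside the guest does not make generator traffic with an arbitrary destination MAC appear on the VF. A sketch of pinning the VF's MAC to the generator's destination MAC from the host (the MAC value is an assumed example):

```shell
# On the host: make VF 0's MAC filter match the destination MAC the
# traffic generator sends to (52:54:00:00:00:01 is an assumed example)
ip link set eth6 vf 0 mac 52:54:00:00:00:01
```

With the filter matching, the RX counters inside the guest should start moving even without an IP address configured.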

 

 

I attached the guest-side log, but I think the interesting part is:

Jan 27 17:03:21 localhost kernel: ixgbevf 0000:00:04.0: NIC Link is Up, 10 Gbps

 

Here is the log from the host side (/var/log/messages); I can't see an error:

Jan 27 12:02:29 1235 avahi-daemon[2598]: Withdrawing address record for fe80::fc54:ff:fe92:7352 on vnet0.

Jan 27 12:02:29 1235 kernel: device vnet0 left promiscuous mode

Jan 27 12:02:30 1235 kernel: ixgbe 0000:15:00.0: setting MAC a2:fc:a8:89:2a:8f on VF 0

 

Jan 27 12:02:30 1235 ovs-vsctl: 00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port vnet0

Jan 27 12:02:30 1235 ovs-vswitchd: 00180|bridge|ERR|bridge virbr0: mirror mirror-1-to-3 does not specify output; ignoring

Jan 27 12:02:31 1235 ntpd[2905]: Deleting interface #28 vnet0, fe80::fc54:ff:fe92:7352#123, interface stats: received=0, sent=0, dropped=0, active_time=981 secs

Jan 27 12:02:31 1235 ntpd[2905]: peers refreshed

Jan 27 12:02:43 1235 kernel: ixgbe 0000:15:00.0: Reload the VF driver to make this change effective.

Jan 27 12:02:43 1235 kernel: kvm: 7957: cpu0 unhandled rdmsr: 0x345

Jan 27 12:02:43 1235 kernel: kvm: 7957: cpu0 unhandled wrmsr: 0x680 data 0

 

 

Jan 27 12:02:51 1235 ovs-vsctl: 00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --may-exist add-port br3 vnet0 -- set Interface vnet0 "external-ids:attached-mac=\"52:54:00:43:BF:8C\"" -- set Interface vnet0 "external-ids:iface-id=\"309ed431-417f-2843-89b8-179a066aa595\"" -- set Interface vnet0 "external-ids:vm-id=\"870e3b65-953b-01ee-84a3-c2d05300c2d9\"" -- set Interface vnet0 external-ids:iface-status=active

Jan 27 12:02:51 1235 ovs-vswitchd: 00183|bridge|ERR|bridge virbr0: mirror mirror-1-to-3 does not specify output; ignoring

Jan 27 12:02:51 1235 kernel: device vnet0 entered promiscuous mode

Jan 27 12:02:51 1235 kernel: ixgbe 0000:15:00.0: setting MAC 52:54:00:d9:f1:bd on VF 0

Jan 27 12:02:51 1235 kernel: ixgbe 0000:15:00.0: Reload the VF driver to make this change effective.

Jan 27 12:02:51 1235 kernel: pci-stub 0000:15:10.0: enabling device (0000 -> 0002)

Jan 27 12:02:52 1235 avahi-daemon[2598]: Registering new address record for fe80::fc54:ff:fe43:bf8c on vnet0.*

Jan 27 12:02:54 1235 ntpd[2905]: Listen normally on 29 vnet0 fe80::fc54:ff:fe43:bf8c UDP 123

Jan 27 12:02:54 1235 ntpd[2905]: peers refreshed

Jan 27 12:03:17 1235 kernel: __ratelimit: 23 callbacks suppressed

Jan 27 12:03:17 1235 kernel: kvm: 8006: cpu0 disabled perfctr wrmsr: 0xc1 data 0xabcd

Jan 27 12:03:20 1235 kernel: ixgbe 0000:15:00.0: eth6: VF Reset msg received from vf 0

And this is /var/log/libvirt/qemu/probe8.log from the host:

 

2015-01-27 17:02:51.762+0000: starting up

 

LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name probe8 -S -M rhel6.6.0 -enable-kvm -m 4096 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 870e3b65-953b-01ee-84a3-c2d05300c2d9 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/probe8.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -drive file=/data/KVM/comp_VNF_MaveriQ_Probe2-cloud8.qcow2,if=none,id=drive-ide0-0-0,format=qcow2,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/var/lib/libvirt/images/configuration8.iso,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=25,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:43:bf:8c,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 0.0.0.0:0 -vga cirrus -device pci-assign,host=15:10.0,id=hostdev0,configfd=26,bus=pci.0,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on

 

char device redirected to /dev/pts/1

 

 

 

I can see that the virtual function assignment succeeded, because I get the following output:

 

virsh net-dumpxml  passthrough-eth6

 

<network connections='1'>

  <name>passthrough-eth6</name>

  <uuid>f8fd37cd-3215-e4b0-17b1-0b5515140db9</uuid>

  <forward mode='hostdev' managed='yes'>

    <pf dev='eth6'/>

    <address type='pci' domain='0x0000' bus='0x15' slot='0x10' function='0x0'/>

    <address type='pci' domain='0x0000' bus='0x15' slot='0x10' function='0x2'/>

    <address type='pci' domain='0x0000' bus='0x15' slot='0x10' function='0x4'/>

    <address type='pci' domain='0x0000' bus='0x15' slot='0x10' function='0x6'/>

    …

  </forward>

</network>

 

 

 

lspci | grep Intel

15:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

15:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

15:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

15:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

….

And "Intel-IOMMU: enabled" appears in dmesg.

From the host dmesg:

ixgbe 0000:15:00.0: setting MAC 52:54:00:97:c6:86 on VF 0

ixgbe 0000:15:00.0: Reload the VF driver to make this change effective.

device vnet0 entered promiscuous mode

pci-stub 0000:15:10.0: enabling device (0000 -> 0002)

assign device 0:15:10.0

vnet0: no IPv6 routers present

__ratelimit: 23 callbacks suppressed

kvm: 6896: cpu0 disabled perfctr wrmsr: 0xc1 data 0xabcd

ixgbe 0000:15:00.0: eth6: VF Reset msg received from vf 0

pci-stub 0000:15:10.0: irq 62 for MSI/MSI-X

pci-stub 0000:15:10.0: irq 63 for MSI/MSI-X

pci-stub 0000:15:10.0: irq 62 for MSI/MSI-X

pci-stub 0000:15:10.0: irq 63 for MSI/MSI-X

ixgbe 0000:15:00.0: eth6: VF Reset msg received from vf 3

pci-stub 0000:15:10.6: claimed by stub

device vnet0 left promiscuous mode

pci-stub 0000:15:10.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100004)

ixgbe 0000:15:00.0: setting MAC a2:fc:a8:89:2a:8f on VF 0

ixgbe 0000:15:00.0: Reload the VF driver to make this change effective.

kvm: 7207: cpu0 unhandled rdmsr: 0x345

kvm: 7207: cpu0 unhandled wrmsr: 0x680 data 0

ixgbe 0000:15:00.0: setting MAC 52:54:00:16:b6:0c on VF 3

ixgbe 0000:15:00.0: Reload the VF driver to make this change effective.

device vnet0 entered promiscuous mode

pci-stub 0000:15:10.6: enabling device (0000 -> 0002)

assign device 0:15:10.6

vnet0: no IPv6 routers present

__ratelimit: 24 callbacks suppressed

kvm: 7254: cpu0 disabled perfctr wrmsr: 0xc1 data 0xabcd

ixgbe 0000:15:00.0: eth6: VF Reset msg received from vf 3

pci-stub 0000:15:10.6: irq 62 for MSI/MSI-X

pci-stub 0000:15:10.6: irq 63 for MSI/MSI-X

pci-stub 0000:15:10.6: irq 62 for MSI/MSI-X

pci-stub 0000:15:10.6: irq 63 for MSI/MSI-X

device vnet0 left promiscuous mode

pci-stub 0000:15:10.6: restoring config space at offset 0x1 (was 0x100000, writing 0x100004)

ixgbe 0000:15:00.0: setting MAC ba:c1:bd:19:c1:d5 on VF 3

ixgbe 0000:15:00.0: Reload the VF driver to make this change effective.

device vnet0 entered promiscuous mode

ixgbe 0000:15:00.0: setting MAC 52:54:00:e2:39:62 on VF 0

ixgbe 0000:15:00.0: Reload the VF driver to make this change effective.

pci-stub 0000:15:10.0: enabling device (0000 -> 0002)

assign device 0:15:10.0

vnet0: no IPv6 routers present

kvm: 7639: cpu0 disabled perfctr wrmsr: 0xc1 data 0xabcd

ixgbe 0000:15:00.0: eth6: VF Reset msg received from vf 0

pci-stub 0000:15:10.0: irq 62 for MSI/MSI-X

pci-stub 0000:15:10.0: irq 63 for MSI/MSI-X

pci-stub 0000:15:10.0: irq 62 for MSI/MSI-X

pci-stub 0000:15:10.0: irq 63 for MSI/MSI-X

device vnet0 left promiscuous mode

pci-stub 0000:15:10.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100004)

ixgbe 0000:15:00.0: setting MAC a2:fc:a8:89:2a:8f on VF 0

ixgbe 0000:15:00.0: Reload the VF driver to make this change effective.

kvm: 7957: cpu0 unhandled rdmsr: 0x345

kvm: 7957: cpu0 unhandled wrmsr: 0x680 data 0

device vnet0 entered promiscuous mode

ixgbe 0000:15:00.0: setting MAC 52:54:00:d9:f1:bd on VF 0

ixgbe 0000:15:00.0: Reload the VF driver to make this change effective.

pci-stub 0000:15:10.0: enabling device (0000 -> 0002)

assign device 0:15:10.0

vnet0: no IPv6 routers present

__ratelimit: 23 callbacks suppressed

kvm: 8006: cpu0 disabled perfctr wrmsr: 0xc1 data 0xabcd

ixgbe 0000:15:00.0: eth6: VF Reset msg received from vf 0

pci-stub 0000:15:10.0: irq 62 for MSI/MSI-X


Linux ixgbe and multiq/mqprio


We have been using the Intel X520 series for a couple of years now and have seen an evolution in how PFC is handled by the driver and Linux. But with our cards, we are unable to get this working in RHEL 6.4/6.5.

We want to give traffic to different destination IPs different PCP values (from the same application). The oldest implementation we used was based on a multiq qdisc, steering traffic to the different queues with tc filters and 'action skbedit queue_mapping'. This only worked in the beginning (2.6.18 kernels); later we had to adjust 'action queue_mapping' to 'action skbedit'. We still enable PFC (DCB) using dcbtool as we always did.

A big difference I see is the number of enabled queues: at first only 8 queues were activated, but in RHEL 6.4/6.5, 71 queues are activated. In some configurations the switch even sends PFC pause frames, but these are not honored by the card, resulting in packet loss.

I have also installed RHEL 7, which has the same issues with multiq. On RHEL 7, though, I can enable the mqprio qdisc (it exists in RHEL 6.5, but I can't enable it with hardware support there, so it's useless). With mqprio, the tc filtering no longer works, and the only option I currently have is to set SO_PRIORITY on the socket in the application (which is not really an option for us). When that is set, PFC starts to work again and I see no packet loss.

Is there any way to get this working again in RHEL 6.4/6.5/(6.6), and to make it configurable in RHEL 7?
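For reference, a minimal mqprio setup of the kind described above might look like the sketch below; the device name, the 8-TC count, and the one-queue-per-TC layout are assumptions for an X520-class NIC, and whether tc filters can still steer traffic on top of it is exactly the open question here:

```shell
# Sketch: map the 16 skb priorities onto 8 hardware traffic classes so
# that SO_PRIORITY (or an skbedit action upstream) selects the Tx class;
# "hw 1" asks the driver to program the mapping into the NIC
tc qdisc add dev eth0 root mqprio num_tc 8 \
    map 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 \
    queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
    hw 1
```

With this in place, the priority-to-TC mapping lives in the qdisc, so anything that sets the skb priority (SO_PRIORITY being the simplest) picks the traffic class without a filter rule.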

82579V network adapter not recognized by PROSet software


Details:

Windows 7 64 bit

Gigabyte GA-Z77x-up5 th motherboard  (F4 bios)

Recently installed a 7260 ac HMWG wireless card with the 17.1 driver. Working fine at 15 MB/s.

 

Problem: the wired network interface was only running at about 20 MB/s and the 82579V adapter's Advanced properties tab was blank, so I decided to uninstall the driver and do a reinstall: first the 19.5 driver, then the 17.2 driver from Gigabyte, then the 11.13 driver from the Gigabyte install CD. All report back: "Cannot install drivers. No Intel adapters are present in this computer."

Intel i218-LM link speed LED states?


I am using the Intel i218-LM ethernet controller. 

 

When I disconnect the Ethernet cable, the link-speed LED pins for both LINK100# and LINK1000# go active low.

 

Why are the LED pins driven active low when a 100 or 1000 Mb/s link cannot possibly exist, since no cable is connected? I would have expected the lines to be inactive high.

Is this behavior normal for Intel controllers?

X540-T2 Overheating


I have 3 HP Z800 workstations into which I am installing X540-T2 cards, teamed with 802.3ad.

 

The first PC has installed fine and running smooth.

 

On PCs 2 and 3 the cards installed fine and work for about 5 minutes, then shut off due to overheating. The heatsinks are noticeably hot to the touch.

 

The only spec difference is that PC 1 has a mid-range graphics card, while PC 2 and PC 3 each have an Nvidia GTX 980 and a SATA card installed.

 

I have taken the cards out and put them into another Z800 box similar to PC 1 and have not had any overheating issues, though I was only able to test over a 1 Gb connection.

 

I am fairly certain the cards were seated correctly in all instances.

 

I have the most current drivers/firmware for the X540-T2 cards installed.

 

The cards are installed in the last slot, closest to the bottom of the case; I believe this is slot 8.

 

I'm looking for any help or guidance on where and how to check the temperature and power delivered to the cards, or any other information that might help me control this overheating issue.

 

Thank you in advance!

Named VLAN ID


I need to create tagged and untagged virtual NICs; however, I need the VLAN ID of the tagged NIC to be the VLAN name, not the number. Is this possible, or are only VLAN numbers allowed in the ID field?


