
high and low update rate #845

Open
tuxembedded opened this issue Aug 6, 2024 · 5 comments

Comments

@tuxembedded

Good morning!

We are trying to get two different slave groups working:
(1) a 20-byte process image at a 10 kHz update rate
(2) a 1 kByte process image at a 100-1000 Hz update rate
We managed to get (1) and (2) working, but only in isolation: when using (1), the other group (2) isn't connected to EtherCAT.

But if we start using two different groups with different update rates, as explained here:

EtherCAT slave groups

Slave groups can be used to group slaves into separate logic groups within an EtherCAT network. Each group will have its own logic address space mapped to an IOmap address and make it possible to send and receive process data at different update rate.

We only get 4.4 kHz for (1) and 3.2 kHz for (2). What we tried:

  • Changing the order in which the slaves are connected to the bus, so that the fast slaves are connected directly to the master. No effect.
  • Using only group "1" for (1), and using group "0" for both (1) and (2). No effect.

It seems to us that when exchanging data with (1), the full 1 kB + 20 bytes of process data is exchanged, which is too much for the machine. We only want to exchange the 20 bytes of process data in the (1) cycle. Is this possible?

This one says (in the table below "Process Data Exchange"):
Having different cyclic tasks (multiple update rates for PDOs) isn't possible with SOEM ...

Any hint would be very helpful.

Thank you very much!

Daniel

@ArthurKetels
Contributor

A Wireshark capture of the slaves running in two groups will help identify the issue. It could well be that there is a timing issue in the Linux network stack for your hardware.

@tuxembedded
Author

OK, I'll try to be in the lab tomorrow, where everything is homed, to create the Wireshark capture. Thanks so far!

@tuxembedded
Author

OK, here comes the data. Slaveinfo:

SOEM (Simple Open EtherCAT Master)
Slaveinfo
Starting slaveinfo
ec_init on ethernet0 succeeded.
2 slaves found and configured.
Calculated workcounter 6

Slave:1
 Name:AX
 Output size: 4024bits
 Input size: 4024bits
 State: 4
 Delay: 0[ns]
 Has DC: 1
 DCParentport:0
 Activeports:1.1.0.0
 Configured address: 1001
 Man: 00000001 ID: 80000800 Rev: 00112000
 SM0 A:1000 L: 128 F:00010026 Type:1
 SM1 A:1080 L: 128 F:00010022 Type:2
 SM2 A:1100 L: 503 F:00010064 Type:3
 SM3 A:16e8 L: 503 F:00010020 Type:4
 FMMU0 Ls:00000000 Ll: 503 Lsb:0 Leb:7 Ps:1100 Psb:0 Ty:02 Act:01
 FMMU1 Ls:000001f8 Ll: 503 Lsb:0 Leb:7 Ps:16e8 Psb:0 Ty:01 Act:01
 FMMUfunc 0:1 1:2 2:3 3:0
 MBX length wr: 128 rd: 128 MBX protocols : 0c
 CoE details: 23 FoE details: 01 EoE details: 00 SoE details: 00
 Ebus current: 0[mA]
 only LRD/LWR:0

Slave:2
 Name:RU
 Output size: 8bits
 Input size: 72bits
 State: 4
 Delay: 700[ns]
 Has DC: 1
 DCParentport:1
 Activeports:1.0.0.0
 Configured address: 1002
 Man: 00000001 ID: 80000000 Rev: 00000013
 SM0 A:1c00 L: 512 F:00010026 Type:1
 SM1 A:1e00 L: 512 F:00010022 Type:2
 SM2 A:1000 L:   1 F:00010024 Type:3
 SM3 A:1600 L:   9 F:00010000 Type:4
 FMMU0 Ls:000001f7 Ll:   1 Lsb:0 Leb:7 Ps:1000 Psb:0 Ty:02 Act:01
 FMMU1 Ls:000003ef Ll:   9 Lsb:0 Leb:7 Ps:1600 Psb:0 Ty:01 Act:01
 FMMUfunc 0:1 1:2 2:3 3:0
 MBX length wr: 512 rd: 512 MBX protocols : 0c
 CoE details: 2b FoE details: 01 EoE details: 00 SoE details: 00
 Ebus current: 0[mA]
 only LRD/LWR:0
End slaveinfo, close socket
End program

RU is the slave with 80 bits of process data - roughly 10 bytes.
AX has roughly 1000 bytes.
We want to exchange data with RU at 10 kHz and with AX at 1 kHz. Each slave gets its own group.
We have a 4-core i.MX 8M Plus system. We did ethtool tuning, gave the real-time process-data exchange thread its own CPU core, and moved the Ethernet adapter IRQs to the non-real-time CPU cores.

Find the Wireshark captures attached. I made them with tcpdump.
captures.tar.gz

@ArthurKetels
Contributor

Thanks for the data.
Observations:

  • The grouping works fine and as intended.
  • The example with RU at 10 kHz actually only runs at 1 kHz.
  • The return time (Tx to Rx) for the 60-byte packet is a whopping 200 µs.
  • By comparison, the 1054-byte packet has a return time of 275 µs.

A back-of-the-envelope (BOTE) calculation gives a transmission time of 6.7 µs for a 60-byte packet in a two-slave set-up; a 1054-byte packet takes 86 µs. So your network stack between SOEM and the NIC eats up 192 µs. That should be your focus, as your cycle time is only 100 µs in total.

Optimize your network stack and NIC driver. I guess there is some interrupt moderation going on. Your CPU is more than capable. I have reliably run 30 kHz on a 144 MHz single-core MCU.
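Interrupt moderation can usually be inspected and, where the driver supports it, switched off with ethtool. A hedged sketch (the interface name `eth0` is an assumption, and the i.MX `fec` driver may not expose every knob):

```shell
# Show the current interrupt-coalescing settings
ethtool -c eth0

# Request one interrupt per frame instead of batched interrupts
ethtool -C eth0 rx-usecs 0 rx-frames 1 tx-usecs 0 tx-frames 1
```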

@tuxembedded
Author

Hmm, our observations:

  • We toggle a GPIO in our EtherCAT cycle (receive process data - send process data), and an external scope shows roughly a 10 kHz toggle frequency.
  • I looked into the packet dump as well and saw too few packets. But what do you call the unit that makes its way through the EtherCAT cycle? I read about "Ethernet frames", but ifconfig and Wireshark report packets. Are packets perhaps aggregated frames?
    • Should I see 10k packets per second in ifconfig/Wireshark when there are 10k EtherCAT cycles per second?
  • Yeah, I see this as well. We lose a lot of time in our Ethernet stack.

Thank you so far!
