Hello,
I have two AGX Xavier Industrial devices connected through a cable providing PCIe Gen1 speed. I am running Yocto Linux for Tegra (kernel 5.10, Kirkstone branch) / r35.4.1 with JetPack 5.1.2.
I followed the instructions on the page below: https://docs.nvidia.com/jetson/archives/r35.4.1/DeveloperGuide/text/SD/Communications/PcieEndpointMode.html?highlight=endpoint
The EP and RC sides could write and read using busybox devmem and the shared memory.
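For reference, the shared-memory check I performed was along these lines (the two addresses below are only placeholders; the real BAR0 address comes from "lspci -v" on the RC, and the backing RAM address comes from the dmesg output on the EP):

# RC side: write a test pattern into the EP's BAR0 (placeholder address, take yours from "lspci -v")
busybox devmem 0x38000000 32 0xfa950000
# EP side: read the same location back through the BAR0 backing RAM (placeholder address, take yours from dmesg)
busybox devmem 0x21f0000000 32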
I also tried the Ethernet instructions, preparing the attached patch for the EP side. eth1 shows up on the EP side, and the virtual Ethernet PCIe node also shows up successfully on the RC side. I set the IP address on eth1 using ifconfig, but neither ping nor iperf3 could communicate.
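For reference, the addressing I used follows this pattern (the subnet and addresses here are just examples, not the exact values on my boards):

# EP side: bring up the PCIe virtual Ethernet interface (example address)
ifconfig eth1 192.168.2.1 netmask 255.255.255.0 up
# RC side: configure the corresponding interface (example address)
ifconfig enP5p1s0 192.168.2.2 netmask 255.255.255.0 up
# RC side: connectivity checks
ping -c 3 192.168.2.1
iperf3 -c 192.168.2.1        # with "iperf3 -s" running on the EP side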
The other interesting thing is that when eth1 on the EP side is brought up, the following message appears on the RC side:
root@jetson-agx-xavier-industrial:~# [ 44.036668] IPv6: ADDRCONF(NETDEV_CHANGE): enP5p1s0: link becomes ready
I also tried ping and iperf3 using the enP5p1s0 Ethernet port, but the result was the same.
I also checked the following page: https://forums.developer.nvidia.com/t/jetpack5-0-2-xavier-pcie-endpoint-mode-repetition/229723
Unlike the instructions depicted on that page, I couldn't find /sys/kernel/debug/pcie-x/; instead there was /sys/kernel/debug/pcie@141a0000/. On the EP side there was no /sys/kernel/debug/tegra_pcie_ep entry/device.
Can you please comment on the statements above?
Can you please provide a procedure to configure communication between the nodes over Ethernet, and also via direct DMA messaging, between the AGX Xavier devices?
Best Regards

Attachment: 0009-eth-over-pcie-for-ep.patch.txt