I have two PolarFire SoC Icicle Kit boards and I’m trying to get some network cards to work in the PCI-Express slot.
With the original image and an older HSS, the PCI-Express bus is enumerated and the card (SolarFlare) is detected, but because the kernel isn’t modular and I haven’t found the kernel sources to rebuild it with the SFC driver, I can’t bring the NIC up.
With the latest Ubuntu Server 22.04 image built for the Icicle, I have to use the latest 2022.10 HSS (even though the comments imply this is not advised for ‘everyone’), or else the image doesn’t boot. But once it boots, and even though it has the needed driver, it doesn’t enumerate any of the PCI-Express devices (microchip-pcie doesn’t show up in the dmesg output).
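This is roughly the check I’m doing on each boot (nothing Icicle-specific; lspci comes from pciutils, which may or may not be on the image):

# Look for the root complex probe and any enumerated devices
dmesg | grep -iE 'pcie|microchip'
# If pciutils is installed, any output at all here means devices were enumerated
lspci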
I tried two different cards that both work (enumerate) with the old kernel/HSS combination, and neither works with the new kernel/HSS.
I am willing to go with a different distro to get a configurable, modular kernel with package management. Are there any recommendations or configurations people have had success with in getting PCI-Express to work? (I see posts about getting GPUs and WiFi working, with limited success and lots of challenges.)
Hey @ppokorny-penguin,
This is unfortunately a known issue: PCIe is disabled in the dts for the Ubuntu image. This will be updated in a future release.
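If you want to confirm that on a running image, the live device tree is exposed under /sys/firmware/devicetree. A rough sketch only; the 'pcie*' node-name pattern is an assumption and can vary between dtb versions:

# Print the "status" property of any PCIe controller node in the live device tree.
# A node with no status property at all defaults to enabled ("okay").
for node in $(find /sys/firmware/devicetree/base -type d -name 'pcie*'); do
    printf '%s: ' "$node"
    tr -d '\0' < "$node/status" 2>/dev/null || printf 'okay (no status property)'
    echo
done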
To give you at least a bit of an explanation:
As you mentioned yourself, we’ve had issues with PCIe, and we’ve done a significant amount of work to improve both the performance and the stability. One of the main issues was that we moved to a >32-bit memory addressing setup, which caused problems for PCIe cards that only support 32-bit address generation. This is why some cards were working and some weren’t. We’ve now fixed this issue.
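You can actually see the effect of that from a running image: /proc/iomem shows where system RAM is mapped, and any region that sits above the 4 GiB boundary is out of reach for a card that can only generate 32-bit DMA addresses. A generic sketch, nothing Icicle-specific:

# Needs root; otherwise the addresses are masked to zeroes.
grep -i 'system ram' /proc/iomem
# Any range starting above 0xffffffff (4 GiB) cannot be reached directly by a
# 32-bit-only DMA master.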
These fixes are in the process of being released on our GitHub org and are being upstreamed. One of those changes is overlaying our DDR, whereby all DDR regions point to the same physical memory; this needed work under the hood to get going. We weren’t fully ready to release it in October (.10), but we needed to provide a candidate for Ubuntu, so we pre-released some of the changes to the memory configuration in the design.
This is why the .10 release is marked as “not for all”: it has a different memory map from the previous releases and isn’t compatible with them; it was basically just for Ubuntu. As all of our PCIe work wasn’t complete in October, we disabled PCIe but left the memory changes in. This meant that once all of our PCIe changes were upstreamed, there wouldn’t be a design change needed to get it working, just a Linux update.
We are now at the point where we have moved our own Linux images to use the .10 configuration - you should also be able to use the 2023.02 setup once it’s released. I cannot give you a date for when Ubuntu will have PCIe enabled again, but you could use our next release on Yocto or Buildroot External to build the image you need.
Could you post links to Icicle-specific information on Yocto and “Buildroot External” so I can try those images with the 2022.10 HSS?
For example, if I downgrade the HSS to 2022.09 and use the images tagged v2022.11 of yocto-bsp found here:
should I expect PCI-Express to work?
Or should I be waiting for you to post yocto-bsp v2023.02 to go with HSS 2022.10?
Yep. PCI-Express works (device enumeration) with that yocto-bsp pre-built image. But there’s no easy way to compile the needed device drivers as modules that I can see.
The way it works (broadly speaking) is that you should use tags together, e.g. the 2023.02 reference design with the 2023.02 HSS and the 2023.02 Linux builds. If something moves ahead (for example, there is a 2023.05 Linux release but no reference design release), things will be backwards compatible unless stated otherwise. Which driver are you trying to add? I’m a bit confused, as it shouldn’t be too hard to add an additional driver to the build.
By the way, with the releases that have just happened (2023.02), everything is now “in sync”: you can use the 2023.02 reference design to boot Ubuntu or run our Linux images, so there is no more divergence.
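If it helps, one common route for adding a driver is a kernel config fragment in your own Yocto layer. A rough sketch only: the kernel recipe name below is a placeholder, and it assumes the recipe accepts .cfg fragments; if it doesn’t, the same options need to go into the defconfig instead.

# In your own layer: a config fragment requesting the driver as a module,
# plus a bbappend that adds it to the kernel recipe's SRC_URI.
KERNEL_RECIPE=linux-example   # placeholder: use the BSP's actual kernel recipe name
mkdir -p recipes-kernel/linux/files
cat > recipes-kernel/linux/files/nic-modules.cfg <<'EOF'
# Example: the Solarflare driver mentioned above, built as a module
CONFIG_SFC=m
EOF
cat > "recipes-kernel/linux/${KERNEL_RECIPE}_%.bbappend" <<'EOF'
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://nic-modules.cfg"
EOF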
The drivers I’m looking to try are sfc for the Solarflare card, and mlx4_core and mlx4_en for the Mellanox card.
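Once I have them built and on the target, checking that they bind should just be a matter of something like this (generic commands; lspci needs pciutils):

modprobe sfc        # Solarflare
modprobe mlx4_en    # Mellanox; pulls in mlx4_core as a dependency
lspci -k            # each NIC should report "Kernel driver in use: ..."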
And by “easy to compile” I mean something like having the kernel source tree in the image, or the header files in /usr/src/kernel for an out-of-tree build. Missing gzip and gunzip for extracting the current kernel config from /proc/config.gz was also a stumbling block.
I expect that if I go to the full effort of setting up Yocto on a build host and completely rebuilding the image, I will be OK, but that’s a lot of work for something that should be as simple as copying a couple of files, creating a simple makefile, and doing an out-of-tree modular driver build.
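For what it’s worth, the flow I have in mind is just the standard one, assuming I can get hold of a kernel source tree that matches the running kernel (and gzip from somewhere); the driver directories are the usual in-tree locations:

# Recover the running kernel's configuration
zcat /proc/config.gz > /tmp/config-running

# In a kernel source tree matching the running kernel:
cp /tmp/config-running .config
scripts/config -m SFC -m MLX4_CORE -m MLX4_EN   # request the drivers as modules
make olddefconfig
make modules_prepare
# Build just the driver directories (cross-building also needs ARCH=riscv CROSS_COMPILE=...)
make M=drivers/net/ethernet/sfc modules
make M=drivers/net/ethernet/mellanox/mlx4 modules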