New RPI 256 SSD Kit Not found. brcm-pcie 1000110000.pcie: link down #6511
Comments
Is this a cross-post from the forums, or are you seeing the same symptoms in a different context?
It was suggested that I create this bug as a result of the forum post, which could not be resolved.
Have you tried without whatever the bit of hardware mounted underneath the Pi 5 is? It always helps to reduce a system to the simplest configuration when debugging a problem.
If you look at the last entry of the post, the Pi and the HAT are the only two components in the system. I did a fresh install of the OS, so it should be as reduced as possible.
It's difficult to say for certain, but in the image attached to https://forums.raspberrypi.com/viewtopic.php?p=2273870#p2273870 it looks like the FFC connector on the M.2 HAT isn't closed. There appears to be a gap between the black of the clamp and the silver of the main connector.
I clamped it down tight and rebooted, but no change. I still can't see the drive, and the link down error message shows in the log. I have tried fiddling with all the connections etc. with no success.
In that case the common denominator is your Pi 5. I would recommend removing the HAT and active cooler and carefully checking both sides of the board for missing or damaged components.
The underside of the board is more vulnerable - any chance of an in-focus shot of that?
When I plug the HAT back in and restart, I get the link down error message again. Does that indicate that I have a bad Raspberry Pi 5 board?
[ 0.000000] Kernel command line: reboot=w coherent_pool=1M 8250.nr_uarts=1 pci=pcie_bus_safe cgroup_disable=memory numa_policy=interleave smsc95xx.macaddr=D8:3A:DD:F7:80:F9 vc_mem.mem_base=0x3fc00000 vc_mem.mem_size=0x40000000 console=ttyAMA10,115200 console=tty1 root=PARTUUID=c71db27b-02 rootfstype=ext4 fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles cfg80211.ieee80211_regdom=US
Given that the Pi 5, M.2 HAT+ and NVMe drive are known good designs, and that you've tried a different M.2 HAT, a hardware fault - Pi 5, NVMe or cable - does seem the likeliest explanation.
"Does that indicate that I have a bad Raspberry Pi 5 board?" Not necessarily. With the default firmware and config.txt settings, this tells us that the firmware was able to detect the presence of the HAT via the DET_WAKE GPIO. The earlier forum post shows the power LED on the HAT illuminated, so the power-en GPIO is working. If the firmware does not detect a HAT, Linux won't attempt to bring up the link unless the dtparam=pciex1 parameter is present. So the system knows that there is a HAT there but is unable to initialize the PCIe link, which could be the Pi 5, the cable or the HAT. It might be worth trying vanilla RPi OS on the SD card, but that probably won't make a difference.
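For reference, the parameter mentioned above lives in config.txt (on current Raspberry Pi OS, /boot/firmware/config.txt). A minimal sketch of forcing the external PCIe port on regardless of HAT detection - the pciex1_gen line is optional and only shown as a debugging assumption, not something this thread established as necessary:

```
# /boot/firmware/config.txt -- force the external PCIe connector on even
# when no HAT is detected (normally unnecessary; shown for debugging).
dtparam=pciex1
# Optionally pin the link to a lower speed while troubleshooting.
dtparam=pciex1_gen=1
```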
Do you know if there is a log, or a way to enable some additional logging, to get more information on why the system thinks the link is down? The RPi 5 has just sat at a desk and taken little to no wear/abuse. I have now tried two new SSD kits with the same issue, so I'm wondering how to get more information to narrow down / identify the issue and whether it is with the RPi 5. Would you speculate towards a software or hardware issue? It sounds like you are saying a hardware issue. And, any advice on how I can narrow this down and isolate the issue? Unfortunately I don't have another new RPi 5.
When you say "SSD kits", do you mean the M.2 HAT, the SSD and the ribbon cable to connect to the Pi 5? If that's the case, the only common element is the Pi 5 and its power supply.
The SSD kit is this: https://www.raspberrypi.com/products/ssd-kit/ . I have the 256GB version and it has everything needed to plug it in, except it doesn't work :(.
I think we can rule out the power supply, given that it's our 27W PSU, so the only thing left is the Pi 5 itself. I can't know how it has been treated - you say it's "taken little to no wear/abuse" - but it doesn't require great carelessness to short something out when the Pi is uncased.
Ok. So do you know if there are logging or other options to get more details about the issue, other than just link down?
Link down is the symptom - the driver configures everything, waits 100ms, then checks the status bits that indicate the link has been established, retrying every 5ms up to 100 times. The error shows that there was no response. If there was some other error or indication that something was wrong then the kernel would have displayed it, but there isn't - it's supposed to just work.
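The wait-and-retry behaviour described above can be sketched in shell. This is not the actual driver code (which is C inside the kernel); `link_is_up` is a stand-in for reading the link-status bits, and here it always fails, mirroring the reported symptom:

```shell
#!/bin/sh
# Sketch of the link-training wait described above. link_is_up is a
# placeholder for reading the controller's link-status bits; it always
# returns failure here, reproducing the "link down" outcome.
link_is_up() {
    return 1    # placeholder: pretend the status bits never come up
}

sleep 0.1       # initial 100 ms settle after configuring the controller
status="link down"
i=0
while [ "$i" -lt 100 ]; do
    if link_is_up; then
        status="link up"
        break
    fi
    sleep 0.005 # retry every 5 ms, up to 100 times
    i=$((i + 1))
done
echo "$status"
```

With a real status check that eventually succeeds, the loop exits early with "link up"; the total worst-case wait is roughly 100 ms + 100 × 5 ms ≈ 0.6 s before the driver gives up.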
If this is the hardware fault that we think it might be, this is the first such instance we're aware of. As such, we would like to inspect the board. Email me - [email protected] - and we can arrange an exchange.
Describe the bug
The new Raspberry Pi 256GB SSD kit is not recognized by the Raspberry Pi 5; the system will not see the new HAT or NVMe drive.
The two primary ways of trying to "see" the M.2 HAT and NVMe drive have been lspci and lsblk. Neither sees them. All attempts at reseating the cable and drive connections have not helped.
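The two visibility checks mentioned above can be wrapped in a small sketch that reports whether an NVMe block device is present, rather than failing outright on a machine without one (lsblk ships with util-linux; lspci, from pciutils, would be the analogous check at the PCI level):

```shell
#!/bin/sh
# Report whether the kernel exposes any NVMe block device. On a working
# setup lsblk lists the drive (e.g. nvme0n1); on the affected Pi 5 it does
# not, matching the "neither sees them" description above.
if lsblk -dn -o NAME 2>/dev/null | grep -q '^nvme'; then
    nvme_state="visible"
else
    nvme_state="not visible"
fi
echo "NVMe drive: $nvme_state"
```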
After assuming the problem was with the RPi SSD kit, I got a new replacement kit and it too is not seen by the system.
The only error I see in the system log is brcm-pcie 1000110000.pcie: link down.
You can see the output of sudo journalctl -b -g pci here: https://paste.debian.net/hidden/95dd02d6
Steps to reproduce the behaviour
There is a long thread in the RPi forum troubleshooting this issue, without success: https://forums.raspberrypi.com/viewtopic.php?t=379680
Device(s)
Raspberry Pi 5
System
cat /etc/rpi-issue
Raspberry Pi reference 2024-11-19
Generated using pi-gen, https://github.com/RPi-Distro/pi-gen, 891df1e21ed2b6099a2e6a13e26c91dea44b34d4, stage4
vcgencmd version
2024/11/12 16:10:44
Copyright (c) 2012 Broadcom
version 4b019946 (release) (embedded)
uname -a
Linux haleakala 6.6.62+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.6.62-1+rpt1 (2024-11-25) aarch64 GNU/Linux
Logs
Dec 02 19:47:46 haleakala kernel: Kernel command line: reboot=w coherent_pool=1M 8250.nr_uarts=1 pci=pcie_bus_safe cgroup_disable=memory numa_policy=interleave smsc95xx.macaddr=D8:3A:DD:F7:80:F9 vc_mem.mem_base=0x3fc00000 vc_mem.mem_size=0x40000000 console=ttyAMA10,115200 console=tty1 root=PARTUUID=c71db27b-02 rootfstype=ext4 fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles cfg80211.ieee80211_regdom=US
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000110000.pcie: host bridge /axi/pcie@110000 ranges:
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000110000.pcie: No bus range found for /axi/pcie@110000, using [bus 00-ff]
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000110000.pcie: MEM 0x1b80000000..0x1bffffffff -> 0x0080000000
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000110000.pcie: MEM 0x1800000000..0x1b7fffffff -> 0x0400000000
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000110000.pcie: IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000110000.pcie: Forcing gen 2
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000110000.pcie: PCI host bridge to bus 0000:00
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000110000.pcie: link down
Dec 02 19:47:46 haleakala kernel: pcieport 0000:00:00.0: PME: Signaling with IRQ 38
Dec 02 19:47:46 haleakala kernel: pcieport 0000:00:00.0: AER: enabled with IRQ 38
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000120000.pcie: host bridge /axi/pcie@120000 ranges:
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000120000.pcie: No bus range found for /axi/pcie@120000, using [bus 00-ff]
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000120000.pcie: MEM 0x1f00000000..0x1ffffffffb -> 0x0000000000
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000120000.pcie: MEM 0x1c00000000..0x1effffffff -> 0x0400000000
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000120000.pcie: IB MEM 0x1f00000000..0x1f003fffff -> 0x0000000000
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000120000.pcie: IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000120000.pcie: Forcing gen 2
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000120000.pcie: PCI host bridge to bus 0000:00
Dec 02 19:47:46 haleakala kernel: brcm-pcie 1000120000.pcie: link up, 5.0 GT/s PCIe x4 (!SSC)
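The log above covers two separate PCIe controllers, and only one of them fails. A sketch of filtering the boot log down to each controller's link-training verdict follows; the two relevant lines are inlined here so the example is self-contained, but on the live system you would feed `sudo dmesg` or `sudo journalctl -b -k` into the same grep. The note that 1000120000.pcie is the internal x4 link (to the RP1 I/O chip, hence "link up" even when the HAT port fails) is my reading of the Pi 5 architecture, not something stated in this thread:

```shell
#!/bin/sh
# Extract the link-training verdict for each brcm-pcie controller.
# 1000110000.pcie is the external (M.2 HAT) port; 1000120000.pcie is
# believed to be the internal x4 link, which is why it comes up even
# when the HAT port does not.
log='brcm-pcie 1000110000.pcie: link down
brcm-pcie 1000120000.pcie: link up, 5.0 GT/s PCIe x4 (!SSC)'

verdicts=$(printf '%s\n' "$log" | grep -E 'brcm-pcie [0-9a-f]+\.pcie: link')
printf '%s\n' "$verdicts"
```

Seeing "link down" for 1000110000.pcie alongside "link up" for the other controller is what localizes the fault to the external port, the cable, the HAT or the drive, rather than to PCIe on the Pi 5 as a whole.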
Additional context
Any suggestions for how to troubleshoot / resolve this issue would be greatly appreciated.