Build environment - Ubuntu (WSL)

Working: Steps for initial setup, clone, CMake config, and command-line build (with PICO_SDK_FETCH_FROM_GIT).

Nota Bene: Avoid the Raspberry Pi Pico VSCode extension.

Working


Install pre-requisites

  • Update apt: sudo apt update
  • Upgrade installed libs: sudo apt upgrade
  • Install git: sudo apt install git
  • Install CMake: sudo apt install cmake
  • Install gcc-arm: sudo apt install gcc-arm-none-eabi libnewlib-arm-none-eabi libstdc++-arm-none-eabi-newlib
Or, all at once:

sudo apt update
sudo apt upgrade
sudo apt install git cmake gcc-arm-none-eabi libnewlib-arm-none-eabi libstdc++-arm-none-eabi-newlib

Initial Clone

Note that this clones from my fork… if you’re not making your own branches / PRs, change the git clone URL to the upstream repository and skip the git remote add command.

cd ~/
git clone https://github.com/henrygab/BusPirate5-firmware bp5
cd ~/bp5
git remote add upstream https://github.com/DangerousPrototypes/BusPirate5-firmware
git pull --all

Initial CMake Configuration

Here, I configure CMake to pull the SDK directly from Git, so everything remains local to each enlistment (checkout).

mkdir ~/bp5/build
cd ~/bp5/build
PICO_SDK_FETCH_FROM_GIT=1 cmake ..
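If you already have a local clone of the pico-sdk (as in the instructions quoted further down), pointing CMake at it should also work instead of fetching from Git; the SDK path below is only an example:

cd ~/bp5/build
cmake -DPICO_SDK_PATH=~/pico-sdk ..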

Build From Command Line

cd ~/bp5/build
make clean
make all
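A parallel build should also work here and is noticeably faster (a sketch; adjust the job count as needed):

cd ~/bp5/build
make -j"$(nproc)"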

Build from VSCode

With the Raspberry Pi Pico extension removed (see the cleanup steps below), the CMake extension happily shows the build targets and building from the status bar “just works”.

Manual cleanup of `Raspberry Pi Pico` VSCode Extension

Having this extension installed, even when it’s fully disabled, causes no end of problems. Worse, the extension refuses to uninstall. The following will remove this extension:

  1. In VSCode, in the Extensions view, select the Raspberry Pi Pico extension and disable it (globally). (Disable is used only because the uninstall option does not work.)
  2. Shut down all instances of VSCode.
  3. In Windows, go to %USERPROFILE%\.vscode\extensions\. For example, this may be C:\Users\john\.vscode\extensions\
  4. Delete any subdirectory for the extension, e.g., raspberry-pi.raspberry-pi-pico-0.15.1, raspberry-pi.raspberry-pi-pico-0.15.2, …
  5. In WSL, go to ~/.vscode-server/extensions/
  6. Just like in Windows, delete any subdirectory for the extension (see the one-liner after this list).
  7. Start VSCode … it will notice that the extensions were modified offline and indicate you should reload the window.
  8. Shut down all VSCode windows.
  9. Finally, start VSCode again and the extension should be gone.
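For steps 5 and 6, a single command along these lines should do it in WSL (the directory pattern is based on the version examples above; double-check what the glob matches before deleting):

# remove every installed version of the extension on the WSL side
rm -rf ~/.vscode-server/extensions/raspberry-pi.raspberry-pi-pico-*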

Which add-ons are needed for VSCode?

… To be added …

Now that building works, the only remaining thing to document is which VSCode extensions should be loaded. Here’s a (partial) list:

… To be added …


Same here. However, I’m not sure how else to get picotool running under Windows. Maybe there is a compiled version now.

Under WSL, the last time I had to rebuild everything, CMake took care of most of it. I just had to check out picotool and the SDK and then cmake … in the Bus Pirate build folder.

This, from the pico github page, is basically what I did:

Install CMake (at least version 3.13), and a GCC cross compiler

sudo apt install cmake gcc-arm-none-eabi libnewlib-arm-none-eabi libstdc++-arm-none-eabi-newlib

Set up your project to use the Raspberry Pi Pico SDK

  • Either by cloning the SDK locally (most common) :
    1. git clone this Raspberry Pi Pico SDK repository
    2. Copy pico_sdk_import.cmake from the SDK into your project directory
    3. Set PICO_SDK_PATH to the SDK location in your environment, or pass it (-DPICO_SDK_PATH=) to cmake later.
    4. Set up a CMakeLists.txt like:
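(The example CMakeLists.txt appears to have been lost in the copy-paste; a minimal sketch, paraphrased from the pico-sdk README, looks roughly like this. The project, target, and source names are placeholders.)

cmake_minimum_required(VERSION 3.13)

# must be included before project() so the SDK toolchain is picked up
include(pico_sdk_import.cmake)

project(my_project C CXX ASM)

# initialise the Raspberry Pi Pico SDK
pico_sdk_init()

add_executable(my_app main.c)

# pull in common dependencies
target_link_libraries(my_app pico_stdlib)

# create map/bin/hex/uf2 files in addition to the ELF
pico_add_extra_outputs(my_app)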

UPDATE:
It was the Raspberry Pi Pico extension. I don’t know how, or why, but I finally uninstalled it by manually deleting all traces from both Windows (%USERPROFILE%\.vscode\extensions\) and WSL (~/.vscode-server/extensions/).

Now the CMake extension is happily showing the targets, and building from the status bar “just works” again.

I’ll update my final steps above.

So something is still failing related to the picotool build on Windows. It looks like a seg fault – can anyone get me the output of the step that’s actually dying there, rather than having it suppressed by the --quiet option that’s being passed?

If you tell me how, I will do it :slight_smile:

I clicked yes when VSCode offered to bring up the container. Build tools descended like magic.

After that though, I couldn’t figure out how to actually build anything.

If you don’t get connected, you should be able to type Ctrl+Shift+P for the command palette. In there, there should be an option to connect. It should then be the same as opening a new terminal. You’ll find yourself in a /project folder, which is mounted to your git repo. At that point it’s the standard mkdir -p build; cd build; cmake ../; make and you should be off to the races, even in Windows.
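(Spelled out as a block, assuming the /project mount described above, that is roughly:)

cd /project
mkdir -p build
cd build
cmake ..
make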

My above comment, though, is actually related to the Windows build server on GitHub. It seems to be failing because of (maybe) the length of the arguments executed on the command line, which I find to be dumb, but that’s the best I can understand of why it would report that it “worked” but then “failed” in the same build context.

I really like the idea of using a devcontainer.

I took a quick look, and the docker-compose.yml includes:

network_mode: host
privileged: true

As a quick first pass at getting a devcontainer working, this makes sense for the person who created the container.

Is this following best practices?

I ask because, long ago, I was taught that privileged: true was only for very specific use cases, such as nested docker, and that individual capabilities should be added as needed. This seems to still be true: docker - Privileged containers and capabilities - Stack Overflow
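For comparison, a capability-scoped compose fragment might look something like the sketch below. It is untested here, the service name is hypothetical, and the exact device mappings or capabilities this project needs would still have to be verified:

services:
  bp5-dev:
    network_mode: host
    # map only the USB bus instead of privileged: true
    devices:
      - /dev/bus/usb:/dev/bus/usb
    # add individual capabilities only if something still fails without them
    # cap_add:
    #   - SYS_PTRACE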

You’re correct that it’s supposed to be used with care. I will never profess to be a Docker pro, but I did this with intent because I shamelessly didn’t want to fight getting specific USB device mappings working on Linux distributions, as well as the other comment here:
General | Docker Docs

If you have settings you think are compatible, I’m happy to try them, but those were the hurdles I hit when I started, and I figured the risk level was low for the 90% of people who just want to spin up and throw away a container for building.


Showing my ignorance:

Is that devcontainer configuration intended to work on a Windows host?

i.e., If I install docker, can I both dev and debug directly from the devcontainer?

Today, I have the RPi Probe and the BP5 both tunneled to WSL2 using USBIPD… and I haven’t been successful at getting VSCode configured properly for debugging.

If so, the debugging capability is very interesting. However, I will have to disable the privileged: true setting, as it seems like a gaping security hole…

I don’t have the probe so I haven’t tried it. It does work for building on Windows; at least it did on Windows 11 for me.

You should be able to remove the privileged part, but I don’t think it’s going to change much to help you fix this based on the link above. You should definitely remove it if you’re going to spin it as a build service.

Do you have a set of steps to try and reproduce your Windows debugging?

WSL2 steps continued (as I cannot edit the initial post anymore):

Debugger from cmd-line

I used the following excellent post here to get myself set up to debug:

From the command line, I can get it to work with two commands (in two shells):

openocd                         \
    -f interface/cmsis-dap.cfg  \
    -f target/rp2040.cfg        \
    -c "adapter speed 5000"
gdb-multiarch bus_pirate5_rev10.elf      \
    --ex "target remote localhost:3333"  \
    --ex "monitor reset init"            \
    --ex "continue"                      \
    --ex "set timeout unlimited"

I’m getting closer. I can get VSCode to build, upload, and appear to be debugging by adding the following contents as file ./.vscode/launch.json:


{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "BP5_Rev10",
            "type": "cppdbg",
            "request": "launch",
            "cwd": "${workspaceFolder}",
            "program": "${workspaceRoot}/build/bus_pirate5_rev10.elf",
            "MIMode": "gdb",
            "miDebuggerPath": "gdb-multiarch",
            "miDebuggerServerAddress": "localhost:3333",
            "useExtendedRemote": true,
            "postRemoteConnectCommands": [
                //{"text": "target remote localhost:3333"},
                {"text": "monitor reset init"},
                {"text": "load"}
            ]
        }
    ]
}


Only one change made above: set "useExtendedRemote" to true instead of false. I have gotten some breakpoints to work. Crossing fingers that this stays working…

Because Windows :crazy_face:

Thanks for the steps. I don’t have an RPi debug probe … sounds dumb, but couldn’t I debug a BP5 with a BP5? The other option I have is a Tigard.

If either of those are something real, I’m happy to try it, but I don’t generally live in Windows.

Regarding the devcontainer, could you clarify whether you’re using a bind mount to access the USB devices from the devcontainer?

In Linux, I think it would look like: "mounts": ["type=bind,source=/dev/bus/usb,target=/dev/bus/usb"] in the compose file (assuming you’re building from scratch).

In Windows … I keep getting sent down this same path of sadness about how devcontainers don’t have USB access: How to use a host USB device in a container in Docker Desktop? - Docker Desktop for Windows - Docker Community Forums

I think I'll stick with WSL2 w/USBIPD for now....

Yeah, Windows Hyper-V doesn’t have USB pass-through support. It seems like this means Docker doesn’t have it either on Windows, because it relies on the Hyper-V layer for such things. That article recommends using USBIPD to pass the device through.

I’m actually using USBIPD already to pass the USB devices through to WSL2. That article suggests that, since USBIPD is just passing the URBs through a network (TCP/IP) layer, the docker container could be set up to allow network communication with the host for the USBIPD protocol.

This would then give the container direct access (without WSL2 as an intermediate layer, but still through USBIPD) to the USB devices (Bus Pirate, debugprobe).

Compared to using a dedicated instance of WSL2 just for development, it sounds like using Docker would add at least one additional layer of complexity (negative), for the benefit of a known-good toolset (positive).

I think I should stick with WSL2 w/USBIPD for now.
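(For anyone following along, the usbipd-win side of that setup looks roughly like this from an elevated Windows prompt; the bus ID is only an example, and this is the syntax used by recent usbipd-win releases, older ones used usbipd wsl attach instead:)

usbipd list
usbipd bind --busid 4-2
usbipd attach --wsl --busid 4-2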


I think I got debug working. Is there an existing way to add printf-like statements that actually output text to the debugger?


Well, the RPi debug probe is $12. The Tigard is $49. I’m sure the Tigard would work as well. It’s a nice device.


I don’t need more hardware (e.g., I have a Segger J-Link and other probes). The hardware is not the problem here. Swapping one working piece of hardware for another working piece of hardware does not get me closer to the goal. Moreover, more folks are likely to have a debugprobe or picoprobe.

The target here is to get the existing, low-cost hardware (which is fit for the purpose) working within the constraints of using WSL2 as a build and debug platform. This is therefore documenting all the hoops needed to get the software set up.

I understand - my comment was more focused on whether I could recreate the same challenges with the hardware I have on hand. Based on the above, it still seems like all the devices operate via USB / serial for interactions into WSL2.

Would you like me to pull together a version of the container that removes the privileged setting and includes copies of the debugging tools mentioned? OpenOCD and gdb-multiarch are definitely in the repos.

I updated the devcontainer version to include these tools, if that’s helpful. I’ve also tried to address some permissions issues. I’ll have my Windows system today, so hopefully I’ll get a chance to try out the same pass-through approach you listed.


My dev box is not available for a couple more days.

I appreciate your work here, and intend to take a look later this week!

I saw you merged the updated container - any better?


I haven’t yet. Real life interferes at the most inopportune times. :laughing:
I figured you had done basic testing and that the change would not make things worse, so it was either neutral or positive, and I gave it the benefit of the doubt.

I still want to try this. It’s unfortunate that it’s not simpler to assign USB hardware to specific VMs / containers with Hyper-V.