

1. A word on the toolchain

The toolchain is the heart of the development process. It is a set of software tools, including an assembler, the Rust compiler, a linker, a debugger and a few other utilities, used to build executable (i.e. flashable) files.

The Rust toolchain is based on the rustc compiler and the dependency manager (and build tool) cargo; programs built with it can be debugged using GDB or LLDB.

There is no general agreement on the best IDE for Rust development, nor is there one recommended by the Rust Team. But, according to a 2021 JetBrains study, the most used IDEs are: VSCode (40%), CLion (24%), IntelliJ IDEA (19%), Vim (8%), Sublime Text (2%), Emacs (2%) and others (5%)...

The following parts of this tutorial will use VSCode, but the development process can be (more or less easily) ported to any IDE you choose to use.

A modern IDE provides several useful features such as an integrated code editor, build configuration management, buttons you can just click to fire complex actions such as the build process, a comfortable debugging interface, navigation in large projects, code completion, symbol indexing...

Do we really need this comfort? Well Mr Robot sure doesn't.

Basically, a simple text editor, a toolchain, and a way to flash the executable image is all you really need to get some code running. Ready to try?

2. Prepare the toolchain

You can download and install the Rust toolchain by following the instructions on rust-lang.org.

These instructions are for Linux systems; if you want to use Windows, download the rustup-init.exe file from this link.

2.1. Compiling tools

Time to open a terminal window (Ctrl + Alt + T).

Make sure the curl tool is installed on your system:

sudo apt-get install curl

You can now install the Rust toolchain by running:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

You may need to open a new terminal (or source the environment file rustup mentions) so that the newly installed commands are on your PATH. To verify that the Rust commands are accessible, check the version with this command:

$ rustup -V
rustup 1.24.3 (ce5817a94 2021-05-31)

To save bandwidth and disk space, the default installation only supports native compilation (compiling for your own system). To add cross-compilation support for the ARM Cortex-M architectures (which STM32 designs are based on), add the compilation target that matches your chip. For the STM32F072RB board (Cortex-M0) used for the examples in these tutorials, that is the thumbv6m-none-eabi target:

rustup target add thumbv6m-none-eabi

Some additional tools are necessary for inspecting the binaries generated by the LLVM backend (the backend of the rustc compiler):

cargo install cargo-binutils

rustup component add llvm-tools-preview

2.2. Debugging tools

We will now install the necessary tools to debug ARM Cortex-M programs:

For Ubuntu 18.04 or newer (Debian stretch or newer):

sudo apt install gdb-multiarch openocd

For older versions of Ubuntu (14.04 and 16.04):

sudo apt install gdb-arm-none-eabi openocd

What are the tools we just installed?

  • GDB is the debugger we will use during this tutorial to run the code step-by-step (you can also use LLDB if you are more familiar with it...).
  • OpenOCD (Open On-Chip Debugger): Because GDB is not able to communicate with the ST-Link debugging hardware directly, it needs a translator. OpenOCD will be our translator (our communication channel between GDB and the ST-Link on the board).

But for OpenOCD to access our ST-Link hardware through the USB port without root privileges, we need to add a udev rule.

Create the file /etc/udev/rules.d/70-st-link.rules, with the contents shown below:

# NUCLEO STM32F072RB - ST-LINK/V2
ATTRS{idVendor}=="0483", ATTRS{idProduct}=="3748", TAG+="uaccess"

# NUCLEO STM32F072RB - ST-LINK/V2.1
ATTRS{idVendor}=="0483", ATTRS{idProduct}=="374b", TAG+="uaccess"

Then reload all the udev rules with:

sudo udevadm control --reload-rules

If the board was already plugged into your computer, unplug it and plug it back in.

You can check the permissions by running this command:

$ lsusb
(..)
Bus 001 Device 011: ID 0483:374b STMicroelectronics ST-LINK/V2.1
(..)

Take note of the bus and device numbers. Use those numbers to create a path like /dev/bus/usb/<bus>/<device>. Then use this path like so:

$ ls -l /dev/bus/usb/001/011
crw-------+ 1 root root 189, 17 Sep 13 12:34 /dev/bus/usb/001/011

The + appended to the permissions indicates an extended permission (an ACL). The getfacl command shows that your user can make use of this device:

$ getfacl /dev/bus/usb/001/011 | grep user
user::rw-
user:you:rw-

OpenOCD can now access the ST-Link device!

2.3. Verify the installation

In this section we will check that some of the required tools / drivers have been correctly installed and configured.

Time to connect the board to the computer (if it's not done already)...

To start OpenOCD up, run this command:

openocd -f interface/stlink-v2-1.cfg -f target/stm32f0x.cfg

You should get console output that looks something like this:

Open On-Chip Debugger 0.10.0
Licensed under GNU GPL v2
For bug reports, read
 http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v38 API v2 SWIM v27 VID 0x0483 PID 0x374B
Info : using stlink api v2
Info : Target voltage: 3.248915
Info : stm32f0x.cpu: hardware has 4 breakpoints, 2 watchpoints

3. Let's code

3.1. Hello, world

To create and initialize a new Rust project in a hello folder, we will use the following command:

cargo init hello

This command creates several files and folders in the hello directory. To see them, we can use the tree package (if not installed: sudo apt-get install tree or just use ls):

$ tree -al -L 2 hello

hello
├── Cargo.toml
├── .git
│   ├── config
│   ├── description
│   ├── HEAD
│   ├── hooks
│   ├── info
│   ├── objects
│   └── refs
├── .gitignore
└── src
    └── main.rs

What are these new files and folders?

  • Cargo.toml is the manifest of the project. It is written in TOML format and lists the features, packages, targets and dependencies of the project:

    [package]
    name = "hello"
    version = "0.1.0"
    edition = "2021"
    
    # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
    
    [dependencies]
    
  • The .git folder contains everything needed for the directory to work as a Git repository: commit history, remote repository addresses, configuration, etc.

  • .gitignore is a plain text file where each line contains a pattern for files/directories to ignore in the Git repository.

  • src/main.rs is the file containing our main Rust program. It only contains a "Hello, world!" program:

    fn main() {
        println!("Hello, world!");
    }
    

We have our first Rust project! We can compile the project using this command:

$ cargo build

   Compiling hello v0.1.0 (/home/<user>/<path>/hello)
    Finished dev [unoptimized + debuginfo] target(s) in 0.43s

The compilation produces a target folder in our project, which contains all our build files, including the executable which we can run using this command:

$ cargo run

    Finished dev [unoptimized + debuginfo] target(s) in 0.00s
     Running `target/debug/hello`
Hello, world!

We can also run the executable directly:

$ ./target/debug/hello

Hello, world!

3.2. Building for STM32

For now this is just a plain Rust project (unrelated to our STM32F072RB board), and compilation defaults to your computer's architecture. Our goal now is to build the program specifically for our target, the STM32F072RB board.

First, we have to make a few changes to our manifest, Cargo.toml, by adding four dependencies:

[dependencies]
cortex-m-rt = "0.6.10"
panic-halt = "0.2.0"
cortex-m-semihosting = "0.3.3"
stm32f0 = {version = "0.14.0", features = ["stm32f0x2", "rt"]}

Here are the descriptions of these dependencies (also called crates):

  • cortex-m-rt: "Startup code and minimal runtime for Cortex-M microcontrollers"
  • panic-halt: "Set the panicking behavior to halt"
  • cortex-m-semihosting: "Semihosting for ARM Cortex-M processors"
  • stm32f0: "Device support crates for STM32F0 devices"
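In this tutorial the stm32f0 crate is only pulled in for its interrupt vector table, but to give an idea of what a device (peripheral access) crate offers, here is a small, hypothetical sketch of typical usage; register and field names follow the svd2rust conventions, so check the generated documentation for the exact API:

use stm32f0::stm32f0x2;

fn enable_gpioa_clock() {
    // take() hands out the device peripherals exactly once.
    let p = stm32f0x2::Peripherals::take().unwrap();
    // Enable the GPIOA clock in the RCC AHB peripheral clock enable register.
    p.RCC.ahbenr.modify(|_, w| w.iopaen().set_bit());
}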

This will make more sense if we look at how we are going to use these packages. Change the main program (src/main.rs) to this:

#![no_std]
#![no_main]

use panic_halt as _;
use stm32f0 as _;

use cortex_m_rt::entry;
use cortex_m_semihosting::hprintln;

#[entry]
fn main() -> ! {
    hprintln!("Hello, world!").unwrap();

    loop {}
}

Let's take a closer look at this program:

  • #![no_std] indicates that the program will not link to the standard library, std. Instead it will link to its subset: the core crate.

    As we want to build for a bare metal environment (as opposed to hosted environments like Linux/Windows which have a system interface), we cannot load the standard library (libstd), which requires some sort of system integration. Instead we will use a subset of this library, called libcore.

  • #![no_main] indicates that this program won't use the standard main interface that most Rust programs use.

  • use panic_halt as _; provides a panic_handler, which defines the behavior to follow when the program panics. In this case, a panic causes the program, or the current thread, to halt by entering an infinite loop (a hand-written equivalent is sketched just after this list).

  • use stm32f0 as _; makes sure the device crate is linked in, so that the device-specific interrupt vector table it provides (via its rt feature) is available to the cortex-m-rt runtime, even though we don't call any of its API directly.

  • #[entry] is an attribute provided by the cortex-m-rt crate that's used to indicate the entry point of our program.

  • fn main() -> ! is our main function, which will be the only process running on our target. Which means we don't want it to end! Ever! The use of a divergent function (-> !) ensures at compile time that this is the case.

  • hprintln! is a macro for printing, with a newline, to the host's standard output (the host here is the computer running the debugger, not the board); the text is sent over the debug link using semihosting.

  • loop {} is an infinite loop... You are probably used to seeing this if you are familiar with embedded programming.
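As promised, here is a minimal sketch of what the panic-halt crate does for us. If we removed that dependency, we would have to provide a #[panic_handler] ourselves, for example (illustration only, not part of this tutorial's code; a program must contain exactly one panic handler):

use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    // Halt by spinning forever. Note that this, too, is a divergent (-> !) function.
    loop {}
}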

If you followed the installation part in detail, you have already added the Cortex-M0 target to the Rust toolchain:

$ rustup target add thumbv6m-none-eabi

info: component 'rust-std' for target 'thumbv6m-none-eabi' is up to date

We can now compile the code by running:

$ cargo build --target thumbv6m-none-eabi

   Compiling nb v1.0.0
   Compiling void v1.0.2
   Compiling vcell v0.1.3
   Compiling bitfield v0.13.2
   Compiling panic-halt v0.2.0
   Compiling cortex-m v0.7.4
   Compiling cortex-m-rt v0.7.1
   Compiling bare-metal v0.2.5
   Compiling cortex-m-semihosting v0.3.7
   Compiling volatile-register v0.2.1
   Compiling nb v0.1.3
   Compiling embedded-hal v0.2.7
   Compiling hello v0.1.0 (/home/<user>/<path>/hello)
    Finished dev [unoptimized + debuginfo] target(s) in 1.07s

We can make this target the default and configure the linker by creating a .cargo/config.toml file in the project with the following lines:

[target.'cfg(all(target_arch = "arm", target_os = "none"))']

rustflags = [
  # This is needed if your flash or ram addresses are not aligned to 0x10000 in memory.x
  # See https://github.com/rust-embedded/cortex-m-quickstart/pull/95
  "-C", "link-arg=--nmagic",

  # LLD (shipped with the Rust toolchain) is used as the default linker
  "-C", "link-arg=-Tlink.x",

  # if you run into problems with LLD switch to the GNU linker by commenting out
  # this line
  # "-C", "linker=arm-none-eabi-ld",
]

[build]
target = "thumbv6m-none-eabi"    # Cortex-M0 and Cortex-M0+

We can now build the program just by running:

$ cargo build

    Finished dev [unoptimized + debuginfo] target(s) in 0.00s

Now, you might think: "It builds so it must work, right? Right?" To that I would answer: "Well it could work, but actually no..."

We need to make one last change for this code to be flashable on our target: describe the target's memory regions in a memory.x file at the root of the project:

/* Linker script for the STM32F072RB-Nucleo */
MEMORY
{
  FLASH (rx)     : ORIGIN = 0x08000000, LENGTH = 128K
  RAM (xrw)      : ORIGIN = 0x20000000, LENGTH = 16K
}

NOTE: If you change the memory.x file after having already built for a given target, run cargo clean before cargo build, because cargo does not track changes to memory.x.
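If you would rather not have to remember the cargo clean step, one common approach (this is what the cortex-m-quickstart template does; it is optional here) is to add a build.rs at the root of the project that copies memory.x into the build output directory, adds that directory to the linker search path, and tells cargo to rebuild whenever memory.x changes:

use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

fn main() {
    // Copy memory.x into the build output directory.
    let out = PathBuf::from(env::var_os("OUT_DIR").unwrap());
    File::create(out.join("memory.x"))
        .unwrap()
        .write_all(include_bytes!("memory.x"))
        .unwrap();

    // Make the copied file visible to the linker and track changes to the original.
    println!("cargo:rustc-link-search={}", out.display());
    println!("cargo:rerun-if-changed=memory.x");
}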

Now we build for the last time before we start flashing:

cargo build

3.3. Debugging

As mentioned in the tool setup part, the debugging process involves two different tools: OpenOCD and GDB.

As a reminder, to start OpenOCD we use the following command:

openocd -f interface/stlink-v2-1.cfg -f target/stm32f0x.cfg

However, we can simplify this call by configuring OpenOCD to use the ST-Link v2.1 interface and the STM32F0x target by default. Create the file openocd.cfg at the root of the project and fill it with the following content:

# Sample OpenOCD configuration for the STM32F072RB-Nucleo development board

source [find interface/stlink-v2-1.cfg]
source [find target/stm32f0x.cfg]

We can now start OpenOCD in a first terminal by running:

$ openocd

Open On-Chip Debugger 0.10.0
Licensed under GNU GPL v2
For bug reports, read
 http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v38 API v2 SWIM v27 VID 0x0483 PID 0x374B
Info : using stlink api v2
Info : Target voltage: 3.248915
Info : stm32f0x.cpu: hardware has 4 breakpoints, 2 watchpoints

Start GDB in a second terminal:

$ gdb-multiarch -q target/thumbv6m-none-eabi/debug/hello

Reading symbols from target/thumbv6m-none-eabi/debug/hello...
(gdb)

To connect GDB to OpenOCD, which listens on TCP port 3333, run:

(gdb) target remote :3333

Remote debugging using :3333
0x00000000 in ?? ()

We can now flash the code to the STM32 using the load command:

(gdb) load

Loading section .vector_table, size 0xc0 lma 0x8000000
Loading section .text, size 0xc28 lma 0x80000c0
Loading section .rodata, size 0x31c lma 0x8000cf0
Start address 0x080000c0, load size 4100
Transfer rate: 14 KB/sec, 1366 bytes/write.

The program is now loaded! But as we use semihosting in our program, we have to tell OpenOCD to enable semihosting. You can send commands to OpenOCD using the monitor command:

(gdb) monitor arm semihosting enable

The following line should appear on both the GDB and OpenOCD consoles:

semihosting is enabled

We can now debug the code! Let's start by placing a breakpoint on the main function:

(gdb) break main

Breakpoint 1 at 0x8000148: file src/main.rs, line 9.

The breakpoint is set; we can now continue execution until we stop at it:

(gdb) continue

Continuing.
Note: automatically using hardware breakpoints for read-only addresses.

Breakpoint 1, main () at src/main.rs:9
9  #[entry]

We stopped at the breakpoint! Which means: if we execute the following line, something should be printed on the OpenOCD console. Let's execute the next line by running:

(gdb) next

This should appear on the OpenOCD console:

Hello, world!

Victory! You compiled code, flashed it on the board and executed it!
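As an aside before we tidy things up: cortex-m-semihosting 0.3 offers a few more helpers beyond hprintln!. The sketch below is purely illustrative (none of it is used in this tutorial's program) and shows printing to the host's standard error and asking the debugger to end the session:

use cortex_m_semihosting::{debug, heprintln};

fn report_and_stop() -> ! {
    // Print to the host's standard error instead of standard output.
    heprintln!("something went wrong").ok();
    // Ask the debugger to terminate the semihosting session.
    debug::exit(debug::EXIT_FAILURE);
    loop {}
}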

We can simplify this procedure quite a lot by adding the following lines to the start of our .cargo/config.toml file:

[target.thumbv6m-none-eabi]
runner = "gdb-multiarch -q -x openocd.gdb"

And by creating a file named openocd.gdb at the root of the project with the following content:

target remote :3333
load
monitor arm semihosting enable
break main
continue

This enables us to just run cargo run to: start GDB, connect to the remote target, load the code, enable semihosting, set a breakpoint at the main function and continue to it. Of course, you'll need to have an OpenOCD session open before running this...