I mean, since we’re already making a new case for new buttons etc., the alternative to integrated 2/4 Gbit flash would be to add a micro-SD slot instead.
You can’t beat the price/performance of micro-SDs.
It would add gigabytes of very fast storage that could be used for dumping long logic analyzer traces and similar tasks where USB speed is currently the limit.
It would also support using the Bus Pirate standalone, without a PC, by loading scripts and binary data to flash etc. from the micro-SD.
The downside is that there are tons of bad micro-SDs on the market, even from formerly reputable brands. And they change which dies they put into one product number like other people change their pants. So you could get bug reports that actually come from bad micro-SDs and not from issues in the BP.
So I’m not fully convinced this is a good idea. Just want to float it.
@henrygab got the 2Gbit chip going a few months back. There’s a little nastiness when crossing between the two “planes”, but overall it seems workable.
At the time we also just bought the 2Gbit from SZLCSC and sent some boards up for rework. Recently we asked our 1Gbit supplier about the 2Gbit chip and they said they couldn’t get it. Will keep looking.
The Winbond part is interesting; it has ECC like the Micron part we use. The NAND is kind of a black box to me because the dhara wear leveling and error detection library is pretty intense stuff. I don’t know, without really looking closely, whether a part is usable and how to integrate it.
The SD card chip is interesting and I’ve looked at several of them. They have wear leveling and bad block detection/marking built in, which is huge. The thing about SD cards (and presumably those chips): the SD Card Association holds a range of patents, including, they claim, a patent on the SD card 0x13 command (could be wrong, something like that). It’s a few thousand a year for a license, plus a bunch of compliance obligations. Very early Bus Pirate 5s did have a microSD card socket, but when I learned of all this nonsense I switched to the NAND flash.
It is literally the stupidest thing I’ve come across doing open hardware. MMC is an open specification. SD card is that, but the init command is 0x13 instead of 0x01 (and then some multi-pin stuff that is a big improvement over MMC). That’s the innovation, and apparently a trade secret, as nobody could ever test all 255 possible init codes.*
The other thing about SD cards is that most top out around 20 MHz, so with the RP2040 we could only run the internal bus at 16 MHz or the card becomes confused and un-initialized (even when not selected).
*Just my cynical take on the SD card situation, missing lots of nuance. The patents should expire soon/eventually. I asked EFF if they had someone to advise on that, but they weren’t interested.
It’s amazing isn’t it. The broad consensus when I talked to other hardware folks was to do it anyways, they’re not actually going after anyone. But, I can’t sleep well when I’m aware of that kind of issue, and I don’t want to put that kind of stuff in an open source project. Plus, I got to add three more RGB LEDs which looks more balanced and pushes it a notch up on the gaudiness scale.
In my first attempt (where I had a wrong mental model), I had nasty hacks for swapping planes. However, in the current code, I wasn’t aware of any remaining nastiness. Can you help me understand what nastiness you’re seeing?
P.S. – the dhara library is mostly a black box to me as well.
This is another reason to avoid SD card hardware … I’ve heard many horror stories. It’s too easy to reprogram SD cards to report any desired size … even when the real size is much smaller. Even when the person bought a respected name brand SD card from a respected retailer, the SD card still might not be authentic. Then, the question of speed … the rated values are only valid for fresh, unused SD cards, and only when writing sequentially, and only when using exFAT(1) directly. Add dhara or other intermediate layer and all bets are off…
(1) exFAT has special support for files that are contiguous (NoFatChain), in combination with the ability to pre-allocate clusters (via the allocation bitmap plus the Stream Extension directory entry’s DataLength field). As a result, writing data on exFAT avoids the need to constantly update the FATs for such files; writing to that one file becomes a sequential streaming write. When done, only the ValidDataLength in the Stream Extension directory entry for that file needs to be updated, not the DataLength field. (Although one could free allocated-but-unused clusters, if not extending the file later.)
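For concreteness, a rough sketch of that directory entry’s layout as I remember it from the published exFAT spec (field names and the NoFatChain bit position are from memory, so verify against the spec before relying on them):

```c
#include <stdint.h>

// Rough sketch of the exFAT Stream Extension directory entry (32 bytes).
// Offsets and names should match the published exFAT specification, but
// double-check before using this for anything real.
typedef struct {
    uint8_t  entry_type;               // 0xC0 for a Stream Extension entry
    uint8_t  general_secondary_flags;  // bit 1 = NoFatChain (file is contiguous)
    uint8_t  reserved1;
    uint8_t  name_length;              // file name length, in characters
    uint16_t name_hash;
    uint16_t reserved2;
    uint64_t valid_data_length;        // bytes actually written so far
    uint32_t reserved3;
    uint32_t first_cluster;            // start of the (contiguous) allocation
    uint64_t data_length;              // bytes pre-allocated for the file
} __attribute__((packed)) exfat_stream_extension_t;
```

The streaming-write trick is then: pre-allocate the clusters, set data_length and the NoFatChain flag up front, write sequentially, and only bump valid_data_length when you’re done.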
P.S. - I may be skipping important details. If anything above doesn’t make sense, just ask … I am happy to fill in background, or fill in stuff I have in my head that didn’t make it onto the screen. I have rather deep knowledge of the FAT family of file systems.
Sorry, didn’t mean to imply nastiness in your code. I was referring to the need to externally copy from one buffer to the next when swapping planes. The chip doesn’t seem to have a mechanism to do that internally.
(no offense taken … just wondering if there was a problem needing fixing in the code)
Ah… the need to special-case for multi-plane in spi_nand_page_copy().
Yes… it appears that dhara “knows” about the potential optimization when copying a NAND page, and calls a unique API for this optimization (vs. calling read() then write() on its own). I was also surprised that there was no command to do cross-plane cache transfer. Then again, maybe that command was only added in a later multi-plane specification … after all, this part seems to be about 10 years old, so maybe the specifications didn’t have such a command? Maybe there’s a secret vendor-unique command to do so?
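To make it concrete, the shape of the special case is roughly this, sketched against dhara’s copy hook (the spi_nand_* helpers, page_plane(), and PAGE_SIZE here are placeholders rather than the actual Bus Pirate functions, and the include path may differ):

```c
#include <stdint.h>
#include "dhara/nand.h"

#define PAGE_SIZE 2048               // assumed page size for this sketch
static uint8_t page_buf[PAGE_SIZE];  // bounce buffer for the cross-plane case

// Placeholders for the driver's own helpers:
unsigned page_plane(dhara_page_t p);  // which plane a page lives in
int spi_nand_internal_copy(dhara_page_t src, dhara_page_t dst, dhara_error_t *err);
int spi_nand_read_page(dhara_page_t src, uint8_t *buf, dhara_error_t *err);
int spi_nand_program_page(dhara_page_t dst, const uint8_t *buf, dhara_error_t *err);

// dhara calls this hook instead of doing read()+prog() itself, so the port
// can use the chip's internal copy-back when possible.
int dhara_nand_copy(const struct dhara_nand *n, dhara_page_t src,
                    dhara_page_t dst, dhara_error_t *err)
{
    (void)n;
    if (page_plane(src) == page_plane(dst)) {
        // Same plane: the page can move cache-to-cache inside the chip, so
        // only the read-to-cache and program-from-cache commands go out.
        return spi_nand_internal_copy(src, dst, err);
    }
    // Different planes: no internal transfer command (at least on this part),
    // so the page makes a full round trip over the SPI bus.
    if (spi_nand_read_page(src, page_buf, err) < 0)
        return -1;
    return spi_nand_program_page(dst, page_buf, err);
}
```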
Regardless, at worst, this adds some traffic on the SPI bus that otherwise would be optimized out.
Is this causing any noticeable drop in performance, or a performance hiccup? If so, and you know details of a vendor-unique command that could be used, I’d be happy to add it.
I doubt it gets hit frequently enough to have a performance impact. The real bottleneck is the USB connection. At present, writing from the chip to the card is plenty snappy, and it would be better optimized with DMA (e.g. the flash command) than by trying to get rid of the occasional plane buffer swap.
The yellow trace is the pin with the 4.7K pull-down, the blue trace is the pin without the pull-down fix.
The PIO is used to get a simultaneous output on two pins; one is “fixed”, the other is not. This is to check that the garbage 4.7K pull-down isn’t mangling the signal output. This is command bug qe9 in the (not yet pushed) firmware.
The little PIO program would have to be re-written to go above 3 MHz, so let’s just fire up the PWM at ~12 MHz. The signals are not synced, but we can see that they are approximately the same.
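For reference, getting ~12.5 MHz out of a pin with the pico-sdk PWM is just a few lines, something like this (assuming the default 150 MHz system clock; the pin and duty cycle are only examples, and two pins on different slices will not be phase-synced):

```c
#include "pico/stdlib.h"
#include "hardware/pwm.h"

// Drive one pin at ~12.5 MHz, ~50% duty, from the RP2350 PWM.
// 150 MHz sysclk / 12 counts per period = 12.5 MHz.
static void pwm_12mhz_on_pin(uint pin)
{
    gpio_set_function(pin, GPIO_FUNC_PWM);
    uint slice = pwm_gpio_to_slice_num(pin);

    pwm_config cfg = pwm_get_default_config();
    pwm_config_set_wrap(&cfg, 11);   // period = 12 counts
    pwm_init(slice, &cfg, true);     // start the slice

    pwm_set_gpio_level(pin, 6);      // high for 6 of 12 counts
}
```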
This is similar to what someone posted on Element14. With the heavy pull-down the signal gets a bit wonky presumably because of the current leakage.
Remember, we’re looking at the buffered output, so the 74LVC1T45s are “conditioning” the RP2350 output. If we looked at the internal signal before the buffer we might see a real freak show. We’re concerned about the output though, and the FCC/CE lab is ready to start, so I say let’s go!
Had a nice Sunday afternoon bicycle trip, and my brain was happily coming up with more feature ideas for Bus Pirate 7:
Individually addressable pullups and pulldowns per pin. Essentially do away with the P-FETs for pullups and replace them with a second LVC1T45 per pin, this one just connected through a resistor to the BPIO going to the DUT.
Yes, I know you wanted to get rid of the IO expanders, but this won’t work without them. It would work either with 595s or with some I2C-based ones like the '9535. To switch the pullup/pulldown off, you’d have to switch the LVC1T45 to input, and then it would be driving against the IO expander, so you’d need something like 47k or 100k resistors in between them.
An alternative to the individual LVC1T45s would be an I2C GPIO expander that allows different bus and IO voltages, like the TCA/PCA6416. The downside is that they are only specced down to 1.65V. But this would save space on the board; they are available in a 4x4 QFN. Since they have 16 channels you could even do pulls with different strengths, like 10k and 470k.
What could you do with this?
For example, automatically measure the impedance of an unknown pin on the DUT. Consider the classic UART case where you aren’t sure which pin is RX and which is TX. Both are high, but the RX of the DUT is just a pullup while TX is push/pull.
With individually addressable pullups and pulldowns you could do this on each pin (a rough sketch in code follows the list):
1. measure the voltage with the ADC, decide if it counts as high or low
2. activate the pullup/pulldown in the opposite direction
3. measure the voltage with the ADC again
4. calculate the approximate impedance using Ohm’s law, and use some threshold to decide whether the value counts as a pullup or push/pull
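A minimal sketch of that procedure in C, assuming a 10k pull resistor and hypothetical adc_read_pin_volts() / pull_enable() helpers (none of these names come from the actual firmware):

```c
#include <math.h>
#include <stdbool.h>

// Hypothetical helpers standing in for whatever the firmware provides.
float adc_read_pin_volts(int pin);         // read a BPIO pin voltage via the ADC
void  pull_enable(int pin, bool pull_up);  // connect R_PULL to VREF_VOUT (true) or GND (false)
void  pull_disable(int pin);

#define R_PULL     10000.0f   // assumed pull resistor value (10k)
#define VREF_VOUT  3.3f       // current IO voltage

// Estimate the source impedance behind a DUT pin, in ohms.
float estimate_pin_impedance(int pin)
{
    float v1 = adc_read_pin_volts(pin);        // 1. idle voltage, roughly the DUT's source level
    bool  pull_up = (v1 < VREF_VOUT / 2.0f);   //    pin reads low -> pull up, and vice versa
    float v_pull  = pull_up ? VREF_VOUT : 0.0f;

    pull_enable(pin, pull_up);                 // 2. pull in the opposite direction
    float v2 = adc_read_pin_volts(pin);        // 3. loaded voltage
    pull_disable(pin);

    // 4. Ohm's law on the divider formed by the DUT's source resistance and R_PULL:
    //      R_dut = R_PULL * (v1 - v2) / (v2 - v_pull)
    float moved = fabsf(v1 - v2);              // how far the pull dragged the pin
    float held  = fabsf(v2 - v_pull);          // how far the pin stayed from the pull rail
    if (held < 0.01f)
        return 1e9f;                           // pin just follows the pull: open or extremely weak
    return R_PULL * moved / held;              // ~0 = push/pull driver, 10k+ = pull resistor
}
```

A result near 0 ohms means the pin is actively driven (push/pull), something in the 10k–100k range looks like a pull resistor, and if the pin simply follows the applied pull it is probably floating.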
Doing this could become part of an automatic safety routine, shown as an option in the menu before enabling a mode. It would be used when talking to an unknown DUT, or when you are unsure about the wiring, to check whether the impedance matches what makes sense for the protocol you are trying to use.
While looking over the schematics I noticed that there is currently a problem this would solve too:
Say you set VREF_VOUT to something low, like 1.8V. What you do with the pullups doesn’t matter. Now you connect a wire carrying low-impedance 5V to one or more of the BPIO pins. Current will then flow through the body diodes of the P-FETs into VREF_VOUT.
There is a 10k pulldown on VREF_VOUT. But if there isn’t much other load, or if there are multiple pins feeding voltage in like this, VREF_VOUT could rise, even though it is fed through 10k resistors. Too high a VREF_VOUT could destroy sensitive circuitry on other pins.
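To put rough numbers on it (assuming the pull resistors are 10k and the body diode drops about 0.6V; I haven’t checked the actual values against the schematic): a single pin at 5V forms a divider of 10k against the 10k pulldown, so VREF_VOUT ends up around (5.0 - 0.6) * 10k / (10k + 10k) ≈ 2.2V, already well above the intended 1.8V. Each additional back-fed pin adds another parallel 10k path and pushes it higher still.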
There could be some protection logic in the BP that could detect this case and shut everything down.
But switching the pullups to individual LVC1T45s or one TCA/PCA6416 would also solve it because they are both safe for up to 5.5V on each pin regardless of the IO-voltage.
This is also similar to how Glasgow does pull-x, as I recall. I am wary of the lower end of the power supply, but could get over it.
The 595s are banished. However, adding an I2C bus doesn’t require an extra pin for each chip. We have one pin free and will gain one by freeing the pull-ups_en pin. We can continue to measure over_current through the analog mux, which gives 3 pins free for I2C (the chip seems to have a handy reset).
If it is a very small chip, perhaps it will fit where the PFETs currently live. One on each side of the connector, giving two pull-up and two pull-down options.
I skimmed the datasheet and one thing I didn’t see is whether it is partial-power-down compliant. Is it ok if the IOs are driven high while the I2C side is powered down?
If there is a cheap and plentiful 2 channel I2C PWM we can use to set the voltage and current limit, then we’d free two more pins that could support advanced pull-x and the second PSRAM chip mentioned above.
VOUT_VREF should be pretty robust to back powering. The 45s are all ok with partial power down. AMUX measures it through an op-amp with current limit resistor and diode to VUSB. What little gets through the 10K + PFETs doesn’t have many places to go. As always though, my work comes with free bugs!
The 1.65V lower limit is just for the pulls, so the regular push/pull IOs via the 1T45s would still work at 1.2V. 1.2V IO voltage levels aren’t very common, and if they are used, they are usually push/pull because at this low level you are quite susceptible to any EMI and other issues, so you don’t want to add to that by using open-drain. So I’m not really concerned about losing 1.2V pull capability.
When free pins are an issue: is there a reason why you are driving the LCD with SPI and not I2C? The LCD controllers usually support both. And with I2C you could just use one common bus and free even more pins this way.
As I wrote, it is a 4x4mm QFN.
They also have a BGA-option if you are feeling adventurous…
The abs. max. values and the recommended operating conditions all list absolute values, not values referenced to the power inputs.
Also they have this in the text: “When an I/O is configured as an input, FETs Q1 and Q2 are off, which creates a high-impedance input. The input voltage may be raised above VCC to a maximum of 5.5 V.”
So I’d say this is supported.
The only thing to look out for is that the power-on reset is tied to the IO power level, not to that of the I2C bus. So you’d have to reconfigure the registers after enabling VREF_VOUT.
Probably the registers and everything are managed internally in the IO power domain, and the I2C power domain just powers a dumb voltage level translator.
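In practice that re-init after VREF_VOUT comes up is just a few I2C writes, something like this (register addresses are from the TCA6416A datasheet as I remember them; the address and the i2c_write_reg() helper are placeholders):

```c
#include <stdbool.h>
#include <stdint.h>

// TCA6416A register map: 0x02/0x03 = output ports, 0x06/0x07 = configuration
// (1 = input/Hi-Z, 0 = output). Verify against the datasheet before use.
#define TCA6416_ADDR       0x20   // depends on the ADDR pin strapping
#define TCA6416_OUT_PORT0  0x02
#define TCA6416_OUT_PORT1  0x03
#define TCA6416_CFG_PORT0  0x06
#define TCA6416_CFG_PORT1  0x07

// Placeholder for whatever I2C write helper the firmware ends up using.
bool i2c_write_reg(uint8_t dev_addr, uint8_t reg, uint8_t value);

// Re-apply the pull configuration after VREF_VOUT has been enabled,
// since the expander's power-on reset wiped its registers.
void pulls_reinit(uint8_t out0, uint8_t out1, uint8_t cfg0, uint8_t cfg1)
{
    i2c_write_reg(TCA6416_ADDR, TCA6416_OUT_PORT0, out0);  // pull direction (high/low)
    i2c_write_reg(TCA6416_ADDR, TCA6416_OUT_PORT1, out1);
    i2c_write_reg(TCA6416_ADDR, TCA6416_CFG_PORT0, cfg0);  // 1 = pull disconnected (Hi-Z)
    i2c_write_reg(TCA6416_ADDR, TCA6416_CFG_PORT1, cfg1);
}
```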
Speed: there are a lot of pixels at 24-bit color depth, and we also share the SPI bus with the NAND chip, which has its own need for speed. Cost: this particular LCD is really nice (IPS, good color) with good supply and an unbeatable price (~$3 in quantity).
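Rough numbers, assuming the panel is 240x320 (worth double-checking): 240 * 320 * 24 bits is about 1.8 Mbit per full frame. At I2C fast-mode’s 400 kHz that’s several seconds per refresh, while tens of MHz of SPI gets it down to tens of milliseconds, so SPI it is.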
I think I’ll delay buying a Bus Pirate V7 until spring/summer next year. I was anticipating some development issues, so I’m holding off for now.
Does the existing V5 or the future V7 have a serial emulator capability? Can it send/receive serial data on a pin (TTL level) from a simple script, or better still a text file? Is there an ability to decode serial data? I’m looking for simple pocket-terminal-type functionality. Thanks.
[[ @ian – this might be better in its own thread? is converting a post to a new thread an option? ]]
Can you help me understand what you mean by “serial emulator” and “pocket terminal”? Are you looking for a turnkey solution, or are you able to code your solution if the building blocks are there?
At a hardware level, the BP5 supports TTL level UART:
Example log of setting to UART mode
HiZ> m UART
Mode selection
1. HiZ
2. 1-WIRE
3. UART
4. HDUART
5. I2C
6. SPI
7. 2WIRE
8. DIO
9. LED
10. INFRARED
x. Exit
Mode > 3
Use previous settings?
UART speed: 115200 baud
Data bits: 8
Parity: None
Stop bits: 1
y/n, x to exit (Y) > y
Actual speed: 115207 baud
Mode: UART
UART>
The above example configures IO4 as TX and IO5 as RX. Connect the corresponding probes to the TTL UART you want to interface with … seems like it should work simply enough.
If you’re thinking of a receive-only (or primarily receive) mode, where it displays what it receives on the BP5’s built-in screen: I’m not aware of such a mode, but the building blocks are there. Something along the lines of: Press button to enable the mode (and modify pixels to show it’s on). Dump bytes received from the UART to the screen (see UART mode above). Press button again to disable the mode (and modify pixels to show it’s off). Might even work without a computer, if you don’t need to send data interactively to the probed device.
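In code terms, the skeleton might be something like this (every name here is a placeholder for whatever the firmware actually exposes, not a real Bus Pirate API):

```c
#include <stdbool.h>
#include <stdint.h>

// Hypothetical firmware hooks for the button, the UART mode, and the LCD.
bool button_was_pressed(void);            // true once per button press
bool uart_rx_byte(uint8_t *c);            // true if a byte was waiting on RX
void lcd_show_status(const char *text);   // update a status area on the screen
void lcd_print_char(char c);              // scroll a character onto the screen

static bool terminal_active = false;

// Called from the main loop: a receive-only "pocket terminal".
void pocket_terminal_poll(void)
{
    if (button_was_pressed()) {                   // toggle the mode on button press
        terminal_active = !terminal_active;
        lcd_show_status(terminal_active ? "RX ON" : "RX OFF");
    }
    if (!terminal_active)
        return;

    uint8_t c;
    while (uart_rx_byte(&c))                      // drain whatever arrived on the UART
        lcd_print_char((char)c);                  // and show it on the built-in LCD
}
```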
For interactive use / probing, you’ll still need a way to send the relevant commands to the BP (via its USB COM port), so a computer of some kind will typically be involved, even if it’s a tiny battery-powered one.
If you need real RS-232...
I realize you said TTL-level UART; still, if you expect to connect to “real” serial ports, it may be worth noting that a dual RS-232 plank has just been released. It’s hot off the assembly line, but it is a solution for true RS-232 support … whether you want to man-in-the-middle comms or just interface with older hardware with classic 9-pin serial ports.
Solid building blocks if you’re able to use them.
If you do make a receive-only terminal that shows onscreen, it’d be great to see your contribution added to the repository. Good luck!