
Split MPU functionality to run alongside PicoGUS #27

Open · wants to merge 8 commits into main

Conversation


@sammargh commented Jan 2, 2024

I want to preface this with a warning: web design is my passion!! Just kidding. But I'm not a very good programmer, so you are more than welcome to take my approach and re-implement it in a different fashion.

I changed the PicoGUS firmware so that the MPU runs on a separate set of control/data addresses, and added functionality to pgusinit so that you can interact with that firmware. The end result is that I can now use the MPU firmware on one PicoGUS and use a second card for the other firmwares.

In pgusinit I opted to move from compile-time definitions to a variable that can be switched by specifying /m at runtime (see the sketch below). This adds the capability to include more alternate card locations; there is enough address space to support, in theory, five firmwares at the same time.
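
A minimal sketch of that idea, for reference only; the port values and names here are illustrative and not the actual ones used in this PR. The point is that the control port becomes a runtime variable that /m redirects to the alternate MPU-only base:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative port values only; the real bases may differ. */
#define PICOGUS_DEFAULT_CONTROL_PORT 0x1D0
#define PICOGUS_MPU_CONTROL_PORT     0x2D0  /* hypothetical alternate base */

static unsigned short control_port = PICOGUS_DEFAULT_CONTROL_PORT;

int main(int argc, char *argv[])
{
    for (int i = 1; i < argc; ++i) {
        /* /m selects the alternate base so an MPU-only card can coexist
           with a second PicoGUS running another firmware. */
        if (strcmp(argv[i], "/m") == 0) {
            control_port = PICOGUS_MPU_CONTROL_PORT;
        }
    }
    printf("Talking to PicoGUS at control port 0x%X\n", control_port);
    /* ... the rest of pgusinit would then use control_port (and the
       matching data port) for all register accesses ... */
    return 0;
}
```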

@wbcbz7 (Contributor) commented Jan 3, 2024

I took a quick look at the changes, and in my opinion there could be a better approach to configuring and running several PicoGUS cards in one system than using a different control port address for each firmware version.
Each RP2040-based board (or rather, not the RP2040 chip itself but the SPI flash chip storing the firmware) has its own unique 64-bit ID, which can be retrieved with the Pico SDK unique_id library. We could theoretically use it to distinguish between several boards in a manner similar to the ISA Plug and Play card isolation process. Unfortunately, ISA PnP isolation relies on devices snooping the bus during the ID I/O read process, which may not be easy to implement with the current PIO scripts, but a similar idea can be realized in other ways.
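
On the firmware side, reading the unique ID really is this simple (pico_get_unique_board_id() is the actual Pico SDK call; the XOR fold is just one possible way to reduce it to the short hash discussed below, not anything specified here):

```c
#include <stdint.h>
#include "pico/unique_id.h"   // Pico SDK unique_id library

/* Read the board's unique 64-bit flash ID and fold it down to 8 bits.
   Any mixing of the 8 bytes would do; XOR is simply the shortest option. */
static uint8_t board_id_hash8(void)
{
    pico_unique_board_id_t id;
    pico_get_unique_board_id(&id);   // fills PICO_UNIQUE_BOARD_ID_SIZE_BYTES (8) bytes

    uint8_t hash = 0;
    for (int i = 0; i < PICO_UNIQUE_BOARD_ID_SIZE_BYTES; ++i) {
        hash ^= id.id[i];
    }
    return hash;
}
```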

For example, during startup each card calculates a "hash" of its board ID; for this example, let it be 8 bits long (256 possible values). The configuration utility puts all cards into an "isolated" state, then starts from hash value 0 and writes it to a "Hash Select" register. All cards whose hash value equals the written value are "activated" and respond to configuration cycles. The utility then reads the board ID back from the card via another register, calculates its hash, and compares it with the current hash value. If they are equal, we can assume that only one card is driving the bus and no bus conflict occurred. In that case, a (linearly incremented) Card Select Number is assigned to that card and used for further configuration cycles. This is repeated for all 256 possible hash values.
Obviously, this approach can cause collisions (with an 8-bit hash we could have at most 2^8 = 256 cards in one system, and the practical limit is much lower; this could be remedied by using several hashes or by adding a "fixup" value to the board ID before flashing the board with a PicoGUS firmware), but it is relatively easy to implement and needs no hardware changes to the PicoGUS board.
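
A rough host-side sketch of that isolation loop, assuming hypothetical HASH_SELECT / BOARD_ID / CSN register indices, a common isolation port, and DOS-style inp()/outp() I/O; none of these names exist in the current firmware:

```c
#include <stdint.h>
#include <conio.h>   /* inp()/outp() as provided by DOS compilers (e.g. Open Watcom) */

/* Hypothetical isolation registers; all cards are assumed to have already
   been put into the isolated state by an earlier command. */
#define ISOLATE_BASE     0x2C0
#define REG_HASH_SELECT  0x00
#define REG_BOARD_ID     0x01  /* 8 successive reads return the 64-bit ID */
#define REG_CSN          0x02

static uint8_t hash8(const uint8_t id[8])
{
    uint8_t h = 0;
    for (int i = 0; i < 8; ++i) h ^= id[i];
    return h;
}

/* Walk all 256 hash values and assign a Card Select Number to each card
   that answers alone, i.e. whose read-back ID hashes to the selected value. */
static int isolate_cards(void)
{
    int csn = 0;
    for (int h = 0; h < 256; ++h) {
        outp(ISOLATE_BASE + REG_HASH_SELECT, (uint8_t)h);

        uint8_t id[8];
        int floating = 1;
        for (int i = 0; i < 8; ++i) {
            id[i] = (uint8_t)inp(ISOLATE_BASE + REG_BOARD_ID);
            if (id[i] != 0xFF) floating = 0;   /* all-0xFF looks like a floating bus */
        }

        /* If two cards share this hash they drive the bus together and the
           read-back ID is garbled, so its hash will very likely not match. */
        if (!floating && hash8(id) == (uint8_t)h) {
            outp(ISOLATE_BASE + REG_CSN, (uint8_t)(++csn));
        }
    }
    return csn;  /* number of cards found */
}
```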

I might research this and make a proof of concept a bit later, so apologies for any possible mistakes :)

@polpo (Owner) commented Jan 3, 2024

wbcbz7's first idea is one I've had for a long time: use the flash ID on the Pico and a method inspired by the ISA PnP card isolation protocol. I've wanted to implement such a thing but haven't had the time to do so. I'm pretty sure it's possible with the current bus PIO program.

The 8-bit hash method sounds a bit easier to implement and would probably solve the problem effectively!
