Hmm, interesting approach with the vertical cards, as suggested by both mg.man and Trash80. The current layout doesn't support anything but a vertical PDS extension right above the existing one, though; there isn't enough room in that critical dimension of the layout:
Everything would have to be rejiggered a bit to let the PDS bus go upward.
But wouldn't it be better to just extend the card rightward so that new peripherals could connect to the 25 MHz fast bus? Something like this:
This is all impossible to implement in the current layout, though! The routing would have to be totally redone to bring the fast bus over to the left side of the card. See this diagram:
The entirety of the address bus (red) would have to squeeze through the little space between the control signals (blue) and the data bus (yellow), and then all three buses would have to run over to the left side of the card. Right now only a few address lines take that route up from the MC68k to the ROM, and the current placement doesn't have enough extra room to run the whole address bus through there. So other than the possibility of a stacked PDS card, I think adding the slots is a bridge too far for now.
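To make the space problem concrete, here's a back-of-envelope sketch. Every dimension in it is a hypothetical placeholder (the real numbers would come from the board's actual design rules), but it shows why a channel that comfortably passes a few ROM address lines can't pass the whole bus:

```python
# Quick feasibility check for squeezing a bus through a routing channel.
# Trace width and clearance below are assumed values for illustration,
# not the actual design rules of this board.

TRACE_MM = 0.15      # assumed trace width
CLEARANCE_MM = 0.15  # assumed trace-to-trace clearance

def channel_width_mm(n_signals: int) -> float:
    """Minimum channel width to route n parallel traces on one layer."""
    return n_signals * (TRACE_MM + CLEARANCE_MM) + CLEARANCE_MM

# The full 68000 address bus is 23 lines (A1-A23); compare that to the
# handful of ROM address lines routed through the gap today.
for n in (4, 23):
    print(f"{n:2d} traces need a channel ≈ {channel_width_mm(n):.1f} mm wide")
```

Roughly a 5x difference in channel width, which is the whole problem in one number.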
ScuzNet is okay, but I think it would be better to leave it as an external device made by someone else. I'm a big fan of the recent SCSI developments, particularly BlueSCSI, but I personally am skeptical of myself/GW shipping something with SCSI. The issue is that all of the new-manufacture SCSI devices (including ScuzNet, RaSCSI, SCSI2SD, etc.) are out of spec on either SCSI bus drive strength or edge rate. Some use chips that are not specified to drive the two big terminators on either end of the bus and may not reach a low enough "0" level. Others use powerful buffers whose outputs transition too quickly, creating a risk that the fast signal edges reflect off the drives as they go past, corrupting the received signal. You can see this in the evolution of the SCSI2SD v5/v6 designs over the past two years: the designer at one point added ferrite beads in series with the SCSI drivers to try to address this issue, then removed them, ostensibly because they caused more problems than they solved.

The question is, with those 74LVT-series buffers on the ScuzNet putting out edges 10x faster than the SCSI spec allows, does it really work reliably at the maximum bus length, with 7 drives on the bus, and with a crappy cable? With shorter cables and fewer drives, problems are less likely, but historically there were times when manufacturers such as Adaptec made SCSI ASICs suffering from this same issue and had to fix it one way or another. Now that this stuff is "vintage," I don't think people are hooking up long SCSI buses with 7 drives, where the signal integrity issue is most likely to manifest, which is why the issue has flown under the radar for so long.

I have not been able to come up with a buffer chip/design that has the requisite 5-nanosecond output edge rate while also being able to output the 48 mA of current required to drive the large SCSI terminators. So right now I have sort of a moratorium on SCSI device development; I would rather not ship a less-than-industrial-strength product that gets flaky when users push the envelope in ways that should be supported.
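To put rough numbers on the edge-rate concern, here's a quick rule-of-thumb calculation. The propagation delay and rise times below are illustrative assumptions, not measurements of any particular device:

```python
# Rule-of-thumb check for the edge-rate problem described above.
# A stub (a drive's connector plus internal wiring) only looks "lumped"
# to a passing edge while its round-trip delay is shorter than the edge
# time; once the edge is faster than that, each drive on the bus becomes
# a reflection source. All figures here are assumptions for illustration.

T_PD_NS_PER_M = 5.0  # assumed cable propagation delay (~0.66c)

def max_invisible_stub_m(edge_ns: float) -> float:
    """Longest stub whose round-trip delay stays under the edge time."""
    return edge_ns / (2.0 * T_PD_NS_PER_M)

for label, edge_ns in (("spec-rate edge, ~5 ns", 5.0),
                       ("74LVT-class edge, ~0.5 ns", 0.5)):
    cm = 100 * max_invisible_stub_m(edge_ns)
    print(f"{label}: stubs up to ~{cm:.0f} cm stay electrically invisible")
```

With spec-rate edges, stubs of a few tens of centimeters pass unnoticed; at 74LVT speeds only a few centimeters do, so every one of the 7 drives on a long bus becomes a potential echo source. That's exactly the worst case described above.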
Maybe we can make custom low-profile PDS cards for a stacked-connector approach. Rather than whole full-sized PDS cards, they'd be thin miniature cards designed to go here:
I think I could fit a video card or a network interface in that space. The key idea is that in designing new cards, we can make sure they aren't so thick that they hit the chassis the way a legacy SE PDS card might.