6152 Crossbow Adapter
@8ff7h "RISC coprocessor card"

The system was housed in a Model 60 chassis, with 1MB standard on the 6152 planar.

6152 "Crossbow" Adapter Base images courtesy of William R. Walsh. 

J??,18 DIP headers for ??? 
P1,2 Headers for memory card
U3 90X0781ESD
U4 6298252
U5 05F3551ESD
U7 83X2761ESD
U8 83X2791ESD
U9 58X4276ESD
U17 23F7481ESD
U22 MC68881RC20A
Y1 29.4912MHz osc
Y2 20.000MHz osc

Y2 is probably the MCA bus oscillator
U22 MC68881RC20A Floating-Point Coprocessor

6152 Memory Daughter Card

U1 05F3130ESD
U6 BELFUSE 0447-0015-A3

   There are three different memory sizes available: 2, 4, and 8MB. From this, I posit that the Crossbow supports 256K, 512K, and 1MB 30-pin SIMMs. Further, note there are five SIMMs per bank. I believe that one SIMM per bank provides ECC, like that on the 7568 Gearbox systems.

SIMMs in the great William R. Walsh psychedelic retro machine were TI TM4256OU-12L with 9x TMS4256FML-12 chips per SIMM.
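If the 2, 4, and 8MB options really come from 256K, 512K, and 1MB SIMMs with one ECC SIMM per bank of five, the arithmetic works out if the card holds two banks (eight data SIMMs plus two ECC SIMMs in total). The two-bank split is my assumption; the sketch below just checks the numbers:

```shell
#!/bin/sh
# Assumed layout: two banks of five 30-pin SIMMs, with four data SIMMs
# and one ECC SIMM per bank. The two-bank split is a guess; the text
# above only states five SIMMs per bank.
usable_mb() {
    # $1 = per-SIMM capacity in KB; 8 data SIMMs total across both banks
    expr 8 \* $1 / 1024
}
for kb in 256 512 1024; do
    echo "${kb}K SIMMs: `usable_mb $kb`MB usable, 2 SIMMs carry ECC"
done
```

With 256K SIMMs this gives the 2MB option, 512K gives 4MB, and 1MB gives 8MB, matching the three sizes listed above.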

Multiple 6152 CPU Cards
|> In comp.sys.ibm.pc.rt article <F?.1.n91@cs.psu.edu>, ehrlich@cs.psu.edu (Dan &) wrote:
|> :
|> :The Dec. '88 release of AOS 4.3 supports 2 CPUs in an RT 6152.  I do not know if IBM ever released any instructions, although I vaguely remember seeing them drift by somewhere.  If memory serves, one also needs a modified version of the 6152 configuration diskette (the one with the diagnostics) so the bus addresses of the second CPU card can be set.  One CPU could be used to run the X server, leaving the other for more useful computations.

|   Is there any other configuration possible?  I'm somewhat more interested in distributed computing applications, and doubling your processing power is always nice.  :-)

Bill Webb
   Aha, somebody finally noticed that! I had intended to post this quite a while back but as usual, until somebody brought it up I had forgotten about it. I had sent out a beta-test of these instructions which was probably what Dan had seen, but hadn't heard back anything so it just slipped until now.

   Please note that the multiple cpu stuff is completely UNSUPPORTED by IBM - it was a personal project that was sufficiently far advanced to get code into the product but was much too late for the documentation and testing required for a supported feature. This feature is not exactly secret, as it was demo'ed at the fall 1988 COMDEX as a technology demo with the two Risc bus masters running BSD 4.3 on top of OS/2 (sigh).

   There is also a paper that I'm going to present at the IBM internal Unix conference next week; as it is not IBM CONFIDENTIAL I hope that I can also post it here. In any case, if anybody does attempt to get a multiple-cpu system going, please send me email with the results.

   In any case, here are some notes on how it is implemented, followed by the instructions for how one builds a multi-cpu IBM 6152 system. Enjoy! Again - this is NOT a supported feature - use at your own risk etc. etc.

Multiple CPU Architecture for 6152
- processors run independently, each runs its own copy of IBM 4.3/6152.
- primary cpu (cpu 0) owns real devices (lan, printer, tape, asy, etc)
- all cpu's share disks (HD, FD, optical) but only 1 cpu writes to a given HD partition.
- supports 1 or 2 additional processor cards, easiest case is 2 (total) cards each of which has its "own" disk.
- access to "other" processor's disk is via read-only mount, or via NFS.
- I/O support program (unix.exe) runs under DOS or OS/2 which handles disk and most other I/O.
- based on PS/2 model 60 and runs on model 80.
- a software "microchannel" device implements a network connection between cpu's. The primary cpu acts as a gateway to connect the secondary cpu's to other machines.
- Unix drivers exist to allow other processor's memory to be accessed and the CPU's to be controlled.
-  minimal Unix kernel changes from latest ship kernel.

-  cpu bound jobs run with no appreciable degradation compared to conventional 6152's (each processor has 2, 4, or 8 meg of private memory). Processors MAY have different memory sizes.
-  I/O bound jobs compete for the same resources such as disk and are degraded, though total I/O throughput is higher. A model 80 helps, as the 286 cpu becomes a bottleneck.
-   a kernel compile that took 53 minutes with 1 cpu took 30 minutes with 2 and 30 minutes with 3. It appeared to be disk bound with 3 cpu's.
-  sharing of work between processors was done at a high-level with a tool "mc" that one specifies as the C compiler to make (e.g.: make "CC=mc cc" )
-  my setup is to use X11 window manager and have a window for each secondary cpu.
-  many X benchmarks give almost 2x performance when the benchmark and server run on different cpu's.
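The mc tool itself is not shown in the post, but the dispatch idea can be sketched. The fragment below is purely illustrative: it assumes a round-robin choice between the two cpu host names used later in these instructions (frodo-mc0, frodo-mc1) and an rsh path between them.

```shell
#!/bin/sh
# Hypothetical "mc"-style dispatcher sketch; the real mc is not shown
# in the post. next_host prints the cpu host name for the Nth job,
# alternating between the two processors.
next_host() {
    # $1 = job number
    case `expr $1 % 2` in
        0) echo frodo-mc0 ;;
        *) echo frodo-mc1 ;;
    esac
}
# A wrapper invoked as  make "CC=mc cc"  would then run something like:
#   rsh `next_host $jobno` -n "cd $dir && cc $args"
# relying on the source tree being visible to both cpu's via NFS.
```

Since each compile is independent, make never needs to know that half its cc invocations ran on the other processor.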

DOS implementation details
-       one copy of unix.exe runs, with various structures changed to have one-per-cpu. Implements a Hot-key (to switch the keyboard, screens, mouse, and speaker from one CPU to another).
-       halt/reboot requests are no-ops from any cpu but cpu 0.
-       code provided to allow secondary cpu's to be automatically started from primary cpu as part of normal boot sequence.
- "main loop" looks for requests from both cpu's; generally services each in turn.
-  about 1 week's effort to get first version working; about 1 month's effort after that to fix bugs and add necessary features.
-       code is in shippable state.
-       runs on DOS 3.3 and has been run (once) on DOS 4.0

OS/2 implementation details
-  one instance of unix.exe runs per cpu; runs in own full-screen session
-  uses standard OS/2 hot-key to switch between cpu's and other OS/2
-   runs on OS/2 1.1 (PM) and 1.0 (disk performance much better with 1.1)
-  microchannel driver implemented using OS/2 shared memory segment.
-  disk performance is good; network performance and "microchannel driver" performance need work (about 10x slower than the DOS version).
-  needs additional work to be a "proper" OS/2 program (totally event driven); it currently polls for requests from the RISC processor that would best be handled by event-driven threads. This would help performance and reduce the drag on background tasks.

-  a second cpu is very handy for kernel development and performance measurements, as one can be working on code on the main cpu while running tests or debugging on the other cpu. I've found it nice to be able to recover from installing a kernel with a fatal bug from home, without having someone on site.

Instructions on how to build a multiple-cpu 6152:
-       assumes that you already have a 6152 with one cpu
-       obtain an additional cpu (possibly by removing it from another 6152)
-       install the December release of 6152 system (or at least the kernel, boot, and unix.exe from December), you will also need the December /usr/bin/X11/Xibm as it knows how to save the screen on hot-key events.
-       install the new @8ff7.adf file onto the reference disk working copy (e.g. via doswrite -va @8ff7.adf):


AdapterId 8ff7h

AdapterName  "RISC coprocessor card"

NumBytes 4

NamedItem Prompt "I/O port"
        Choice "01e0-01ef"
                io 01e0h-01eFh
                int 7
                arb 14
        Choice "01f0-01ff"
                io 01f0h-01fFh
                int 7
                arb 12
        Choice "200-20f"
                io 200h-20Fh
                int 7
                arb 10
        Choice "Disabled"
"Default I/O address is 1e0. <Disabled> disables the adapter."

-  install additional processor card, and use the reference diskette to autoconfigure the system. If the two processors have different amounts of memory the first one (port 0x1e0) should have the larger amount of memory, as that is where one usually runs the X server.
-   the simplest installation will have two processors, each with an HD for its own use (you will need a root and a swap for each processor, but /usr can be shared, either via mounting it read-only, or via nfs).

        For simplicity I will assume that each disk has the normal root, swap, and /usr partitions.

-  create two new host names, e.g. if the original system was 'frodo' then create frodo-mc0, and frodo-mc1. frodo-mc0 will be the gateway machine for cpu1 to the rest of the world. (You should use your local naming conventions if they are different from what we use).
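The post leaves the actual name and address entries to local convention. As a purely illustrative example (the addresses here are invented; substitute your own network number), the /etc/hosts additions for the internal network might look like:

```
# invented example addresses for the software-microchannel network
192.9.201.1     frodo-mc0       # cpu 0, gateway side of internal net
192.9.201.2     frodo-mc1       # cpu 1
```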
-       the hd1 disk can be created by cloning the hd0 disk, e.g. use fdisk and minidisk to create the DOS/BIOS partition and the unix partitions respectively and then copy the unix partitions via the normal newfs/dump/restore mechanism.
                newfs -v hd1a
                mount /dev/hd1a /mnt
                cd /mnt
                dump 0f - / | restore rvf - 
                cd /
                umount /dev/hd1a

                newfs -v hd1g
                mount /dev/hd1g /mnt
                cd /mnt
                dump 0f - /usr | restore rvf - 
                cd /
                umount /dev/hd1g

-       change /etc/rc.config on hd1 to reflect the new hostname and network address, e.g. change network and hostname entries to:


-       on the hd0 disk, add the following lines to rc.config, so that we make frodo into a gateway (this may require allocating a new network number for 'frodo')


-       now you can boot up the system. Note that when unix.exe starts it will tell you that you have 2 processors. The first processor should now be able to come up normally. Once it gets to the login state you can hot-key to the second processor (control-alt-esc), and boot it. It should come up on the second disk by default (e.g. boot hd(1,0)vmunix.) 
-       if everything has worked properly you can add a line to /etc/rc.config so that the second cpu is automatically started when the first is about to go multiuser. This is done by specifying:


        in /etc/rc.config.

-       if one wants to reboot the second cpu, one can do so by first halting or rebooting it (e.g. /etc/halt or /etc/reboot), and then issuing the following commands (on cpu 0):

                /etc/por 1
                /etc/load /boot
                /etc/ipl 1

(note that you must put the 6152 version of boot into /boot rather than the RT version).

-       note that messages about the state of the second cpu are displayed on the console of the master cpu, so that one can determine whether it has halted or attempted to reboot.

-       note that if your configuration file doesn't have the line "device mc0 at iocc0 csr 0xffffffff priority 13", then it will need to be added in order to send packets between the two Risc cpu's.

The above views are my own, not those of my employer.
Bill Webb (IBM AWD Palo Alto), (415) 855-4457.
UUCP: ...!uunet!ibmsupt!webb 
