The Amiga Pointer Archive is missing one: we went to get a Fast RAM upgrade for an A1000, and when we looked at his work, it was a cross between a jeweler and SimCity.
We had the work done, and he put his boot disk in to test it. He had the most interesting pointer... the boot disk was called Romeo, and the pointer was just a 3x3 square diamond, connected by a single line of pixels to a 2x2 square diamond with a hole in the center... so you could literally see a single pixel through it. I had never seen anything like it, so I copied it on my Macintosh (System 6.0.8 and System 7.0.1), and later to Windows 3.1 onward through Windows XP, until screens became too large for anything close to a single-pixel pointer.
I wish I had the disk we got it from... just a single 3.5" floppy labeled "Romeo", and we all knew what it was.
I scanned all the disks; I've only seen "Bravo Romeo Delta" (a 1992 game) and a Romeo Knight music disk. But the site has a pointer editor; did it look like this?
What's FortNight? I tried looking it up but got Fortnite as the top result, and forcing a literal search with quotes just brings up the dictionary definition. Sadly I don't know of a way to do a case-sensitive web search.
Technical feedback:
Every single announcement like this compression one needs to state the lower limits of the machine requirements. If a 64 GB model is compressed 224x, shouldn't it be able to run on a 292 MB video card (64 GB / 224 ≈ 292 MB)?
No, the compression result doesn't mean the original 64 GB model can run on a 292 MB card. The teacher model isn't the thing that's compressed; it still needs to be loaded during training.
What gets small is the student: the tiny head trained on the teacher's first-layer fields. That head ends up a few MB because it's not a transformer at all. It's basically a lightweight function approximator that reproduces the teacher's behavior on the specific task it was trained for.
So training still requires the usual multi-GB footprint (which can be done offline). After training, inference with the student requires only the head. That's why inference is cheap but you can't load the full teacher into 292 MB of VRAM.
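To make the split concrete, here's a minimal sketch of that training phase, assuming the "layer-1 field" means the teacher's first-layer hidden states and the student is a small MLP head. The model name ("gpt2" as a stand-in teacher), the pooling, the task, and the file name are all placeholders, not anything from the actual repo.

```python
# Sketch only: teacher loaded once (the multi-GB part), a small head trained
# on its first-layer hidden states, then saved on its own.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
teacher = AutoModel.from_pretrained("gpt2")   # full teacher footprint lives here
teacher.eval()

@torch.no_grad()
def layer1_field(text):
    """Pool the hidden state right after the teacher's first block."""
    batch = tok(text, return_tensors="pt")
    out = teacher(**batch, output_hidden_states=True)
    # hidden_states[0] is the embedding output; [1] follows block 1
    return out.hidden_states[1].mean(dim=1).squeeze(0)

# Student head: a couple of linear layers, a few MB at most, no transformer.
hidden = teacher.config.hidden_size
student = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(), nn.Linear(256, 2))

# Fields can be extracted once, offline; after that the teacher can be dropped.
texts, labels = ["example input A", "example input B"], torch.tensor([0, 1])
fields = torch.stack([layer1_field(t) for t in texts])

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(student(fields), labels)
loss.backward()
opt.step()

torch.save(student.state_dict(), "an1_head.pt")  # the only file inference needs
```

The point of the sketch is just the memory split: the teacher's footprint exists only while the fields are extracted, and the saved head is the entire inference-time artifact.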
That's exactly what I was trying to infer from the abstract, which sadly doesn't explicitly call out memory requirements. I assume it increases inference time by getting rid of transformers. What are the memory requirements then?
Edit: they claim these somewhere in the doc:
> Memory
> Teacher model: multi-GB (entire model must be loaded)
> AN1 head: a few MB (only head needed after training)
I find the claims surreal; can't wait for someone to validate this, or I will do it myself. It would have been handy to upload such a "few MB" weight file distilled off Llama 70B so that we can see for ourselves whether the 220x inference speedup and in-memory model size compression are real.
The memory story is actually much simpler than it looks.
The teacher still has to be loaded at training time, so the footprint is whatever the original model uses. Again, the compression doesn't shrink the teacher. It produces a small student head. After training, the teacher is no longer needed and the student runs by itself. That's why the inference footprint drops to a few MB.
It doesn't increase inference time at all. It removes transformers entirely from the inference path. The student computes directly on the layer-1 field, which is why it's so small and so fast.
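For the inference side, a matching sketch (same assumptions as the training sketch above, with the field dimension and head shape hard-coded as placeholders) shows why only a few MB ever gets loaded:

```python
# Sketch only: nothing transformer-shaped is loaded here, just the saved head.
import torch
import torch.nn as nn

hidden = 768   # must match the field dimension the head was trained on
student = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(), nn.Linear(256, 2))
student.load_state_dict(torch.load("an1_head.pt"))
student.eval()

field = torch.randn(1, hidden)           # stand-in for a precomputed layer-1 field
print(student(field).softmax(dim=-1))    # task prediction from the head alone
```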
On the request for a distilled "few MB" head for Llama 70B, that part is already reproducible right from the repo. The head is always task-specific, not a general LLM, so uploading a single checkpoint wouldn't tell the whole story. The better path is to run the extraction script and train the head for any task you want. The pipeline is fully open, end to end. I'm looking for people to validate it independently.
If you need anything else cleared up, just let me know.
This is cool, but the renormalization and the (programmable and bidirectional) barrel shifter are of much more interest.
I had a 10 MHz XT, and ran an 8087-8 at a slightly higher clock rate. I used it for both Lotus 1-2-3 and Turbo Pascal-87. It made Turbo Pascal significantly faster.