[geeks] LLM (AI) Some sort of Thread

Joshua Boyd jdboyd at jdboyd.net
Mon Dec 30 19:25:39 EST 2024


For the retro theme (even though this is rescue), allow me to point out 
two interesting examples:

The first is sadly only on the Xitter as far as I've been able to find:
https://x.com/mov_axbx/status/1749374268872311295
Running a large language model trained for stories on an Indigo 2. If 
the code is executing an FP16 version of the model, I imagine there is 
more performance to be had, since the Indigo 2's MIPS CPUs have no 
native FP16 support and would have to convert on the fly. Follow-up 
post from the fellow here: 
https://x.com/mov_axbx/status/1749668966455206340

Here is a different stunt a company did:
https://blog.exolabs.net/day-4/

This time they are using a Windows 98 Pentium II PC, so I would imagine 
it should work fairly well on other 90s systems too.  It looks like 
they are running the model with a very straightforward, unoptimized C 
implementation, built with an old Borland compiler.

Ultimately, vintage machines might be more hampered by the lack of 
instructions to support this work (SIMD, fast multiply-accumulate) than 
they are by raw clock speed.

On the modern home front, Apple's ARM-based Macs are proving to be very 
good for running a lot of local LLM options.

Some people call it auto-predict on steroids, which I think is in some 
ways a reasonable way to think about it, but it also does a good job at 
tasks that would take a tremendous amount of work to hand code.

I keep running into opinions expecting LLMs to reason more on their 
own, say by performing complex math (or even simple math), but I think 
the key is more likely to be getting them to work with other tools.

One of the keys to responsible LLM usage is to only use it for things 
you can verify.

Is anyone interested in more talk about this topic?




