[Discussion] Use of AI/LLM in the retro community


eric

Administrator
Staff member
Sep 2, 2021
1,200
2,056
113
MN
bluescsi.com
This is a hot topic, so please be respectful - you can have positive and/or negative opinions about it, but there is no reason to personally attack or be rude to someone who has a different opinion.

There have been quite a few projects in the retro community lately that were fully or partially built with an AI/LLM, and that trend seems to only be increasing.

What do you think about the use of LLM/AI for retro projects?
 

eric

I'll kick it off:

My opinions of LLM/AI have changed quite a bit over time (and likely will keep changing). On one hand, I think it's an amazing technical achievement, and its advances, especially in its ability to code, have been a boon for software developers' productivity. On the other hand, it seems it could cause the downfall of civilization.

Linus Torvalds had a few interesting takes on LLM usage; to paraphrase two I can recall (I'm trying to find the video): the cat's out of the bag and it's not going back, and it's similar to when compilers came around and developers no longer needed to write ASM.

LLM/AI currently only work (well) when the user knows, or can evaluate, the answer that is given. If you ask it a question you don't know the answer to, it gives you a wrong answer, and you can't evaluate whether it's true or not - then it's absolutely useless.

I kind of liken it to cutting down a tree with an axe vs. a chainsaw - both will cut down the tree, one will just take a lot longer. If you don't know how to use either, you'll still be faster with the chainsaw, but you also might cut off your leg or fell the tree onto your house.

Re: retro - learning ANSI C or ASM is fun and interesting, but usually you're learning it to get some result/app/etc. If you never grew up with that, gaining a lifetime of experience isn't feasible for a small hobby. I personally don't mind if a project is done with an LLM, as long as it's useful and works.

One thing to note, too, is that I've seen some paid, vibe-coded retro software. While that is the author's choice, the barrier to entry is so low now that your paid vibe-coded software could be vibe-coded by someone else in an afternoon as well. It's really an interesting time where the barrier to entry is almost zero. I'm not sure what that will mean for the future.

Of course this space is changing almost daily so my opinions can/have/will change.
 

eric

This is a pretty interesting use of an LLM on a G4 (and maybe even Mac Plus?!)

I Put Modern LLMs on a 2002 Macintosh - No Internet Required



On a hypothetical Mac Plus: AltiVec OFF, FPU OFF (SANE), 2MB arena, single-layer paging, 64-token KV cache, Tiny model only.
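For a sense of scale, here's a back-of-envelope sketch of whether a tiny transformer could live inside those Mac Plus constraints. This is purely illustrative: the model shape and layer count below are my assumptions (roughly llama2.c "tiny stories" scale), not the project's actual numbers.

```python
# Back-of-envelope check for the Mac Plus constraints above: 2 MB arena,
# single-layer paging, 64-token KV cache. DIM, HIDDEN, and N_LAYERS are
# assumed values, NOT taken from the project.

def layer_bytes(dim, hidden, bytes_per_weight=4):
    """fp32 weights for one transformer layer (attention + FFN)."""
    attn = 4 * dim * dim          # wq, wk, wv, wo projections
    ffn = 3 * dim * hidden        # w1, w2, w3 (SwiGLU-style FFN)
    return (attn + ffn) * bytes_per_weight

def kv_cache_bytes(dim, max_tokens, bytes_per_val=4):
    """Keys + values for one layer, capped at the context window."""
    return 2 * dim * max_tokens * bytes_per_val

ARENA = 2 * 1024 * 1024           # the 2 MB arena from the post
DIM, HIDDEN, N_LAYERS = 64, 172, 5

# Single-layer paging: only one layer's weights are resident at a time,
# but the 64-token KV cache for every layer must stay in RAM.
resident = layer_bytes(DIM, HIDDEN) + N_LAYERS * kv_cache_bytes(DIM, 64)
print(resident, resident < ARENA)   # comfortably under the arena
```

The takeaway matches the constraints listed: with only a 64-token context the model can barely hold a sentence at a time, which is why only a Tiny model is plausible at all.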
 

MacOfAllTrades

Tinkerer
Oct 5, 2022
201
227
43
Thanks for starting the discussion in a post!
Full disclosure - I recently spent a few weeks porting the open-source Linux Doom over to System 7, specifically targeting the SE/30. The community has been generally positive, with a few negative responses in a thread on 68kmla related to AI's use in making it. It's my first time using AI (in this case Claude Code) to help make a vintage Mac app, though I have made vintage Mac apps before.

Opinions follow:

## Trusting AI-generated code
Inherently, anything AI-generated is less trustworthy than something human-created. Not because AI necessarily produces worse code than the average human (it may or may not), but because the kinds of mistakes AI makes (be it in pictures, algorithms, logic, code, bugfixes) are simply different from the mistakes humans make.
So if this matters in the application - strange failure modes might be unacceptable. Take two examples:
  1. A System 7 application that is a game (let’s say Doom)
    If there are weird, unexpected, and unlikely failure modes in the AI's code, compared to human code, it's probably alright. I know of no one using Doom for safety-critical decisions or workflows where comprehending and controlling failure cases is paramount, such that AI-induced failures would be a major concern.
  2. A self-flying plane codebase
    Clearly a safety critical application. The stakes for failure are tremendous. The safety argument for the overall system is underpinned with safety analyses that include things like fault trees and hazard + risk analysis, etc. Basically — understanding and mitigating failure modes (be it from the software or otherwise) is a must for being able to maintain and justify the safety of the system.
My point is that you certainly *require* trust in the code in #2 and you *do not require* trust in the code in #1. Therefore the bar for accepting AI code is much lower in #1 than in #2. It's the safety-criticality of the application that makes the difference. And I'm not saying you can't have item #2 with AI code in it - but you'd want to do a lot more than vibe code it, that's for sure.

Rounding back to our Hobby of retro tech - The whole thing is pretty much full of non-safety-critical applications (think like #1 above).



Ok so that’s trust.. what else matters in our hobby?
## Code accessibility
Another thing that AI induces is a "Comprehension Debt" in its developers. If not handled well, the developer(s) of a project will quickly lose touch with how all the code they are pumping out works — innate system insights are lost or never developed — and making changes becomes harder and harder as this Comprehension Debt expands. Take Doom-SE/30: of the code that was changed beyond the initial linux-doom it was based on, I wrote maybe 1% of it by hand. The rest was done by Claude's models. If it matters, and I think it does, the 99% written by Claude was "managed" by me. It wasn't just "go" and come back 14 hours later to find the game baked to where it is now. There were issues, bugs, plans, failures, iterations, arguments with the model, etc. I do not claim that this makes up for the Comprehension Debt, but it is different from "go" and hitting publish.
But back to code accessibility — insofar as a retro tech project hopes for, expects, or ultimately benefits from community handoff and contributions now or in the future, it would certainly behoove the codebase to be understandable, because the first step to extending it later (especially by others) is the codebase being accessible, or rather "comprehensible." So if AI-built code incurs a comprehension debt, and there is hope or desire for the code to be carried forward by the community in the future (or heck, even by the original author after they've forgotten key details), then there is benefit in the produced code being comprehensible.

A point here is that there are TONS of software products that do not have any source code available but whose value is still high. I don't have the source code to SimpleText, but I'm glad the program is there and I get a ton of value from it. The same could be said about the vast majority of programs we use for our retro hobby, if for no other reason than that most of what we use is old closed-source products. Put another way, what's the difference between:
  • High-comprehension-debt AI-based apps for our retro machines that work
  • Low-comprehension-debt AI-based apps for our retro machines that work
Yes, one is ultimately better than the other in absolute terms. But if they both work, is the typical hobbyist going to sit around saying "I'm so glad the one I'm using exhibited low comprehension debt by its authors"?


Don’t get me wrong — there are _certainly_ examples of open-source projects in retro tech communities where comprehension debt and accessibility of the code DO MATTER. For example, the BlueSCSI project, itself a fork of the ZuluSCSI project, is a community-supported endeavor. There I’d say it’s extremely important that Comprehension Debt be managed (AI-induced or otherwise). Using AI tools to “vibe code” this sort of project (for lack of a better term, meaning AI coding done in a way that produces high comprehension debt in the code) comes at great risk, because that debt will work against the broader effort and advancement of the project in the medium and long term.

Key question others might know: how are big, famous open-source projects managing contributions when it comes to AI-assisted changes? How is Linus Torvalds handling it with the open-source parts of Linux? He’s still the big approver on major changes… Maybe someone here knows? Or @eric, how would you look at AI-assisted submissions to the BlueSCSI repo? Have you set guidelines? More curious than saying you should or shouldn’t.

## Other relevant considerations (I assume no right/wrong stance on these, but they might feed thought and conversation)
  1. Alternate workflow and mindset frameworks for considering AI-assisted retro community code
    1. Given the documentation I produced and included in Doom-SE/30 (see PERFORMANCE_IDEAS.md and OPTIMIZATION_HISTORY.md and the Source Code and Port manual file), one could argue that the real long-term value of the project is in having these records of HOW the port was done, WHAT optimizations were tried and their results, and FUTURE WORK. In many ways it strikes me like a research article in academia. It didn’t productize the work, but it did present the results of enough prototyping to get it working. Someone could look at it, and the codebase, and the docs I cited, and advance it. Heck, with the codebase and those documents someone could go and re-implement the optimizations by hand in (hopefully) a more efficient way (better SW performance), as well as in a way that resolves the AI trust/comprehension issues. Or in 30 years they could just feed that info in as part of the preparation for re-doing the port from scratch (with or without AI).
      Does providing that extra documentation help make things better? I provided it both for that reason and to keep a record of what I tried. AI is not intelligent, and having records helps keep it in check when it forgets what was done in the past.
  2. The “Who cares, if it works?” mindset
    1. If you always wanted Mario on your Atari, and no one ever made it, and you can make it and do so with the help of AI --
      How much more really matters? (possible answers follow - none do I claim to be my position)
      1. We don’t want AI-slop programs in the retro tech community, so even if it gets us programs and functions we’ve wanted and didn’t have, I’d rather not have them if they carry AI risks or issues
      2. I’m glad to have it at the cost/risk of AI-induced issues, but I want it treated “different” somehow in the community. Maybe it’s a naming convention, maybe it’s that it’s collected differently on sites like `macintoshgarden.org` so we know it’s AI “tainted” somehow.
      3. Nothing else matters — we had no Mario on Atari, and now it’s here - yay.
    2. Note that sub-bullet 2 above is interesting but brings loads of concerns. Who governs this / decides when something has “enough” AI to count? Who enforces it? What about folks who lie, or just don’t tell people that they used AI?

Anyway - this post is long enough but I was excited at the prospect of this sort of convo in our community. It’s a conversation that should happen throughout not only coding industries/communities but throughout society. So if you’re in this convo and have something to say - welcome - and note that you’re thinking about questions that the entire human race has to ponder and act on. Pretty cool to be part of something that large and important.
 

XodiumLabs

Tinkerer
Oct 25, 2021
69
125
33
South Bay Area, CA
xodium.net
Personally, I'm one of those people who runs to retro tech to *escape* the woes of modern tech and what AI is currently doing to it, so on one hand I'm not really a fan of bringing it to retro tech projects. It's kind of like moving to put some distance between you and your ex, only to have them move to exactly where you moved.

I do understand AI helps with some retro tech coding things, but to put it in the words of a friend of mine... it's like, uh, certain explicit things you can do to yourself. Great if that floats your boat, but I would prefer not to hear about it. Tell me about what YOU did with YOUR project; don't tell me about how Claude is so awesome for helping you out.

That said, I do think AI is great at some things, and running local models on devices you own is generally relatively harmless; that I don't really care about. But trying to welcome the big LLMs like ChatGPT and Claude and such into our hobby and letting them run rampant? Eh. My opinion lands more in "could we not, please?"

I also did write something on my own site about how I view AI stuff, if you want to read that.
 

Trekintosh

New Tinkerer
Dec 31, 2024
47
23
8
AI image generation has absolutely no place, full stop. Code has a measurable input and output. While there is an artistry to making it, at the end of the day you can measure whether it works or doesn’t. (Personally I resent AI code writing entirely, but I can at least *see* an argument for it.)
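To make the "measurable output" point concrete, here's a toy illustration (a made-up function, not from any project in this thread): code either passes an objective check or it doesn't, regardless of who, or what, wrote it.

```python
def rle_encode(data: bytes) -> list:
    """Run-length encode bytes into (value, count) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            # Extend the current run of the same byte value.
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

# An objective, binary verdict -- the thing a logo or album cover
# can never give you:
assert rle_encode(b"aaabcc") == [(97, 3), (98, 1), (99, 2)]
```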

Image generation, on the other hand, is entirely repugnant. Visual arts are *not* measurable. Any time you slop out a logo or album cover or what have you, you’re taking effort and joy directly from a human experience. I would unironically prefer MacPaint scribbles to any kind of slop image. “But it’s a placeholder!” So are scribbles, and they’re not going to be accidentally left in.
If you need art and really don’t want to do it yourself, try Fiverr or just ask around. There are tons of artists basically idling at all times who will sell their work dirt cheap.

As for writing, like your “sell sheet” for your project, etc.? I can’t really say you’re taking money out of an artist’s hands because it’s just you; however, I hate the vibe of slop text. It’s like having a Microsoft apology letter whispered directly into your brain, so I prefer not to see it.

P.S.: if you say you “put ChatGPT on a retro computer” and it’s just a web portal, you should feel bad. Go whole hog like the video above, or be honest.
 

alexhoopes

New Tinkerer
Jul 1, 2025
2
1
3
This is a pretty interesting use of an LLM on a G4 (and maybe even Mac Plus?!)

I Put Modern LLMs on a 2002 Macintosh - No Internet Required


Awesome to see my project shown here, thank you so much!

While I used to not really care for AI, my mind changed when I was working an IT job at a large company and learned how companies are really looking for AI-knowledgeable engineers, and how over the next few years it’ll be a vital skill to have in the workforce. I wrote my first program while working as a lab technician at Intel, automating server deployments and registration in our software, and I’ve been extremely interested in becoming an engineer ever since. Programming kind of just clicked for me after I taught myself (in 2020, before the AI takeover, lol) using YouTube, reading documentation, and asking questions on Stack Overflow.

I recently moved out of state and have been looking for a job, and as IT jobs haven’t panned out so far, I’ve wanted to go full force into applying for an engineering job somewhere.

I know that, as you said, AI is very controversial in this space, and I was a little worried about how the community would react to my program, but as I’m 23 with no degree, I needed a solid portfolio piece. Knowing how much AI matters to modern companies, I knew this would be a head-turner for recruiters and a great interview talking piece. Going to update my resume and start applying for jobs this week and see what happens :)

Again, thanks for the share. Very cool to see what I made posted on a forum I visit and learn a lot from.

This community rocks!
 

MacOfAllTrades

This thread is in a mode of folks sharing their opinions openly right now, which is great. Keep ‘em comin’!

More food for thought:
# Analogies
## Programming Analogies
- **Compilers replacing assembly** — moved developers up a level of abstraction
- **IDE/autocomplete evolution** — context-aware suggestions vs. basic completion
- **Calculator for mathematicians** — handles tedious work, frees focus for higher-level problems
- **Integrated Stack Overflow** — instant solutions without context-switching

The key thinking point on all of the above is that the more you had to do with the code you are producing (or verifying, etc.), the more you learn about it and any system it fits into, and losing this is a detriment to everything except short-term time-to-delivery. And yet I don’t think the decline in assembly knowledge has greatly set back the mass of programmers in the last 30+ years. I’ve written assembly, but it’s not a skill I polish often, and it’s certainly not a skill where proficiency would likely translate into better results overall: the rate of code production would suffer so greatly that any gains in performance would be outweighed. Furthermore, it’d be a setback in portability and reusability of the code. You can’t do (or it’d be way harder to do) certain things in assembly that you can do with fancy C/C++ mechanisms, so the things many programmers get from shared libraries and that sort of portable, generic code would be diminished or lost if we all wrote in assembly.
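One toy way to see the portability/genericity point (illustrative only, in a high-level language): a single routine written at a high level of abstraction serves every comparable type unchanged, whereas hand-written assembly would need a separate version for each data layout and comparison instruction.

```python
def running_max(items):
    """One generic implementation; works unchanged for ints, floats,
    or strings -- anything supporting comparison. An assembly version
    would be tied to one word size and one compare instruction."""
    best, out = None, []
    for x in items:
        best = x if best is None or x > best else best
        out.append(best)
    return out

print(running_max([3, 1, 4, 1, 5]))   # integers
print(running_max(["b", "a", "c"]))   # strings, same code path
```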

## Non-Programming Analogies
- **GPS for navigation** — guides the route but you choose the destination — The skill of knowing your streets and routes has been endangered ever since GPS turn-by-turn nav became commonplace. I am impressed when I meet someone who is still on top of this skill, but it’s one that I’m willing to mostly let gather dust in my personal toolbox.
- **Power tools in construction** — amplify capability while requiring skill and judgment. I definitely have more artisanal respect for a woodworker who made a furniture piece without power tools. But I don’t hold that we should all strive for hand-made, hand-tool-only furniture, nor that there is more utility in such furniture. In fact, a mass-produced piece of furniture probably helps more people than a fancy, expensive hand-tool-only one does, so I think its value to society is greater. But I certainly think a fine hand-tool-only piece of furniture is ’nicer.’
- **Spell-checker for writing** — catches errors but doesn’t create the content, so it’s not the same capability. But I definitely don’t assume people’s innate spelling ability is represented in an email or text, because autocorrect and spell checking exist. And quite frankly, when I do see misspellings in such informal writing, I assume they’re from fast typing on a keyboard and not representative of the person’s intelligence. Now, if they show up in a legal brief or filing, then I do think the person is less capable.
- **Photoshop for image editing** — changed the game on what graphic artists could do and how long it would take. But now we rarely think “Oh, they used Photoshop before publishing that picture?!”
- **HDR in photography** — I was at an art fair and saw some cool canvas-printed photography works. I asked if they used a computer to enhance them, and they sheepishly admitted they did (HDR/saturation adjustments, etc.). It did make me feel it somehow reflected less photographic talent.


# Exercises
## Quality & Trust Exercises
- **The authorship reveal** — your favorite program was 80% AI-written; does this change your perception of its quality or reliability? Short answer for me: yes. I’d lose trust in it, because I’d assume it was less likely to have been verified thoroughly and less likely to have had its issues observed. Whether I _care_ about this perception change depends on how much quality/reliability I need out of the particular program. Note this aspect is what makes me skeptical when I hear “stocks are falling in traditional software companies because people expect AI to displace their products” — I don’t see it happening with current AI capabilities, given businesses’ dependence on, and need for, reliability in the enterprise-grade products they spend big money on. EXAMPLE: I don’t think CNN will cancel their Microsoft 365 subscription because they hired an engineer who used AI to make a suite of Office-like programs for CNN to use from now on. I just don’t see companies taking that risk.


## Skill & Learning Exercises
- **The explanation challenge** — AI generates a solution you don't fully understand; do you ship it, study it first, or rewrite it? — I’d say it depends, again, on what it might be used for. Safety-critical: hell no, you don’t ship it. Casual personal project: up to you. With infinite time: definitely study it. Enterprise-grade product where maybe it’s not safety-critical but you’ve got large customer contracts on the software: you definitely don’t ship it.

## Value & Craft Exercises
- **The creative contribution question** — what percentage of "your" code can be AI-generated before it stops feeling like your work? Very interesting question — I’d extend it further out: how far removed can you be from the coding of a software product before you can’t claim a primary role in its creation? This happens all the time with “<such and such> is the father/mother of <some product>”:
- DeLorean was the father of the Pontiac GTO
- Steve Jobs created the Macintosh (/iPhone / iPad / Apple I, Apple ][)

We already have a societal track record of accepting crediting project- / team-/ company- leads as the ‘creators’ of them.
So if you said “Bob used Claude’s artificial intelligence when he made his new program” it’s somehow less valid (in terms of assigning the credit to Bob) than when you say “Steve used Andy Hertzfeld’s actual intelligence when he made the Macintosh” (something we all know but regardless accept when Steve is broadly credited with making the Macintosh).
- **The craft vs. output dilemma** — do you value the process of writing code, or only the end result of working software? There’s a lot of human psychology in this, too. It’s part of “well, they found a way to sweat less to get this done, so I don’t want to give them as much credit.” Certainly I agree that if it’s cheaper to make something, that ought to properly impact the market (i.e., reduce its final cost to the consumer), but I don’t give a BMW model less credit if between this year and next they find a way to turn a factory step from human assembly into robot assembly.
 