e7h4nz 11 hours ago [-]
I really resonated with the end of README.md.
I have a huge stack of embedded development boards at home—all kinds of SBCs, microcontrollers, FPGAs, and more. Over the years, as a hobbyist, I've bought them consistently. Overall, I’ve definitely bought much more than I’ve actually used.
Before LLMs came around, the friction of using them meant wading through vendor manuals and spec requirements. Just setting up the environment could consume the project's entire focus before I even started writing the actual logic or code.
Now, I use cc specifically to handle those inner layers that aren't really part of the creative design. The most interesting part of what I actually want to build is the creative process. I let cc deal with the environmental noise: linking errors, driver conflicts, register initialization failures, and so on, so I can focus on the work itself.
The LLM researches and writes the code, while the human handles the architectural decisions. That seems like the right way to divide the work.
jstsch 10 hours ago [-]
Same! I set up a DIY home battery system and needed something to communicate with the inverter and battery over RS232, RS485, and Bluetooth. I had an old Raspberry Pi Model B with 512MB of RAM lying around.
Running three Python scripts as daemons would have been a tad heavy, and I was a bit worried about serial timing, so I used Codex to rewrite them in Go. I cross-compiled them, deployed the binaries, and they're now running very comfortably on the Pi at a few MB of RAM per daemon. Good use of an almost 15-year-old device.
dvrp 14 hours ago [-]
As suspected, this project was made possible, in part, by LLMs. I have also been exploring using LLMs for Game Boy Color testing.
There is something quite ironic about old hardware becoming increasingly useful due to current developments in AI research. I wonder about the repercussions of this.
userbinator 34 minutes ago [-]
There's another driver, HDADRV9J, which was around long before LLMs and works on Windows 3.x as well as 9x.
armada651 14 hours ago [-]
As AI companies buy up the hardware supply, the rest of us will be increasingly dependent on obsolete hardware. So it's a good solution to the problem it created.
SyneRyder 13 hours ago [-]
Before anyone gets the impression that the whole thing was done by AI (like I did): it seems this wasn't vibe-coded, more AI-assisted:
"Generative AI (Large Language Models) have been used for research and debugging help in the course of this project. Small amounts of boilerplate code have also been written by LLM (C++ class and interface definitions). I do not intend to make this a "Vibe Coded" project. Pull requests automatically generated by a LLM tool will not be accepted."
grishka 12 hours ago [-]
AI did not enable anything in this project. According to the readme, it was only used a tiny bit. It would've been possible 100% without AI.
In fact, I had this exact idea on my list of potential future projects for a while so I could teach myself more low-level programming. I avoid AI like the plague in everything I work on, and I still very much considered it doable.
d3vnull 11 hours ago [-]
Everything is 100% possible without AI. AI just shifts the effort cost.